Feb 16 13:31:49 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 16 13:31:49 crc restorecon[4680]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 13:31:49 crc restorecon[4680]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 13:31:49 crc restorecon[4680]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:49 crc 
restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:49 crc restorecon[4680]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 13:31:49 crc restorecon[4680]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc 
restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 
13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 13:31:49 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 
crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 
13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 13:31:50 crc 
restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc 
restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc 
restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 13:31:50 crc restorecon[4680]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc 
restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 13:31:50 crc restorecon[4680]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 13:31:50 crc restorecon[4680]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 16 13:31:51 crc kubenswrapper[4812]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 13:31:51 crc kubenswrapper[4812]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 13:31:51 crc kubenswrapper[4812]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 13:31:51 crc kubenswrapper[4812]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 13:31:51 crc kubenswrapper[4812]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 13:31:51 crc kubenswrapper[4812]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.635534 4812 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641061 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641096 4812 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641107 4812 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641118 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641129 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641139 4812 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641147 4812 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641155 4812 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641163 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 13:31:51 crc 
kubenswrapper[4812]: W0216 13:31:51.641172 4812 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641179 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641189 4812 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641197 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641205 4812 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641214 4812 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641222 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641230 4812 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641238 4812 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641245 4812 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641254 4812 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641276 4812 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641285 4812 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641293 4812 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641302 4812 
feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641311 4812 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641320 4812 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641329 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641341 4812 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641352 4812 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641364 4812 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641372 4812 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641382 4812 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641390 4812 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641399 4812 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641409 4812 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641418 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641428 4812 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641464 4812 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641474 4812 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641484 4812 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641493 4812 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641502 4812 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641510 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641523 4812 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641533 4812 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641542 4812 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641551 4812 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641559 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641570 4812 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641581 4812 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641592 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641603 4812 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641615 4812 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641626 4812 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641637 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641648 4812 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641657 4812 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641668 4812 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641678 4812 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641687 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641698 4812 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641708 4812 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641718 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641728 4812 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641735 4812 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641747 4812 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641755 4812 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641766 4812 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641776 4812 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641787 4812 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.641797 4812 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.641963 4812 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.641981 4812 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.641996 4812 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642009 4812 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642021 4812 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642030 4812 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642042 4812 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642054 4812 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642065 4812 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642074 4812 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642084 4812 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642094 4812 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642103 4812 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642112 4812 flags.go:64] FLAG: --cgroup-root=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642121 4812 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642130 4812 flags.go:64] FLAG: --client-ca-file=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642139 4812 flags.go:64] FLAG: --cloud-config=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642149 4812 flags.go:64] FLAG: --cloud-provider=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642158 4812 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642171 4812 flags.go:64] FLAG: --cluster-domain=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642180 4812 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642190 4812 flags.go:64] FLAG: --config-dir=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642199 4812 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642209 4812 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642221 4812 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642230 4812 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642240 4812 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642250 4812 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642261 4812 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642270 4812 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642280 4812 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642290 4812 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642299 4812 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642319 4812 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642328 4812 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642338 4812 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642348 4812 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642357 4812 flags.go:64] FLAG: --enable-server="true"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642366 4812 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642377 4812 flags.go:64] FLAG: --event-burst="100"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642386 4812 flags.go:64] FLAG: --event-qps="50"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642395 4812 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642404 4812 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642415 4812 flags.go:64] FLAG: --eviction-hard=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642426 4812 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642435 4812 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642470 4812 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642481 4812 flags.go:64] FLAG: --eviction-soft=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642490 4812 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642500 4812 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642509 4812 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642518 4812 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642527 4812 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642537 4812 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642546 4812 flags.go:64] FLAG: --feature-gates=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642556 4812 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642565 4812 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642575 4812 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642584 4812 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642594 4812 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642604 4812 flags.go:64] FLAG: --help="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642614 4812 flags.go:64] FLAG: --hostname-override=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642624 4812 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642634 4812 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642643 4812 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642652 4812 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642660 4812 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642669 4812 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642678 4812 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642687 4812 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642696 4812 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642705 4812 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642716 4812 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642727 4812 flags.go:64] FLAG: --kube-reserved=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642739 4812 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642750 4812 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642762 4812 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642773 4812 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642785 4812 flags.go:64] FLAG: --lock-file=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642798 4812 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642810 4812 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642822 4812 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642840 4812 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642851 4812 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642863 4812 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642874 4812 flags.go:64] FLAG: --logging-format="text"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642885 4812 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642898 4812 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642909 4812 flags.go:64] FLAG: --manifest-url=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642921 4812 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642937 4812 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642948 4812 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642961 4812 flags.go:64] FLAG: --max-pods="110"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642972 4812 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642983 4812 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.642992 4812 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643001 4812 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643010 4812 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643019 4812 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643028 4812 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643048 4812 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643056 4812 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643065 4812 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643074 4812 flags.go:64] FLAG: --pod-cidr=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643084 4812 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643099 4812 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643108 4812 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643117 4812 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643126 4812 flags.go:64] FLAG: --port="10250"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643137 4812 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643146 4812 flags.go:64] FLAG: --provider-id=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643155 4812 flags.go:64] FLAG: --qos-reserved=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643163 4812 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643172 4812 flags.go:64] FLAG: --register-node="true"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643183 4812 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643193 4812 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643209 4812 flags.go:64] FLAG: --registry-burst="10"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643218 4812 flags.go:64] FLAG: --registry-qps="5"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643227 4812 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643237 4812 flags.go:64] FLAG: --reserved-memory=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643249 4812 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643258 4812 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643267 4812 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643276 4812 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643285 4812 flags.go:64] FLAG: --runonce="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643294 4812 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643304 4812 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643313 4812 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643321 4812 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643330 4812 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643340 4812 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643349 4812 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643358 4812 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643367 4812 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643376 4812 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643385 4812 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643395 4812 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643405 4812 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643414 4812 flags.go:64] FLAG: --system-cgroups=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643423 4812 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643471 4812 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643482 4812 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643491 4812 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643503 4812 flags.go:64] FLAG: --tls-min-version=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643512 4812 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643522 4812 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643532 4812 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643541 4812 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643551 4812 flags.go:64] FLAG: --v="2"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643590 4812 flags.go:64] FLAG: --version="false"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643603 4812 flags.go:64] FLAG: --vmodule=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643615 4812 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.643625 4812 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.643885 4812 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.643897 4812 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.643907 4812 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.643917 4812 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.643929 4812 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.643941 4812 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.643952 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.643962 4812 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.643972 4812 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.643982 4812 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.643992 4812 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644001 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644011 4812 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644021 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644035 4812 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644048 4812 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644060 4812 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644073 4812 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644083 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644092 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644105 4812 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644118 4812 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644131 4812 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644142 4812 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644153 4812 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644163 4812 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644173 4812 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644182 4812 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644192 4812 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644203 4812 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644214 4812 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644223 4812 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644233 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644242 4812 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644255 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644263 4812 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644272 4812 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644280 4812 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644287 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644296 4812 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644304 4812 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644312 4812 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644320 4812 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644327 4812 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644336 4812 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644344 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644351 4812 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644359 4812 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644366 4812 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644374 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644382 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644390 4812 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644398 4812 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644405 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644413 4812 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644421 4812 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644429 4812 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644437 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644478 4812 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644486 4812 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644494 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644502 4812 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644510 4812 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644518 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644525 4812 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644532 4812 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644541 4812 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644548 4812 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644557 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644565 4812 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.644573 4812 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.644587 4812 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.656549 4812 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.656624 4812 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656708 4812 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656720 4812 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656726 4812 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656734 4812 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656745 4812 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656752 4812 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656758 4812 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656765 4812 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656772 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656778 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656783 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656788 4812 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656795 4812 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656802 4812 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656808 4812 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656814 4812 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656820 4812 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656826 4812 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656831 4812 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656836 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656841 4812 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656846 4812 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 13:31:51 crc 
kubenswrapper[4812]: W0216 13:31:51.656851 4812 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656856 4812 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656861 4812 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656866 4812 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656871 4812 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656878 4812 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656883 4812 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656889 4812 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656895 4812 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656900 4812 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656906 4812 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656913 4812 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656921 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656926 4812 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656931 4812 feature_gate.go:330] unrecognized 
feature gate: GCPLabelsTags Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656936 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656941 4812 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656946 4812 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656951 4812 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656956 4812 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656961 4812 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656966 4812 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656971 4812 feature_gate.go:330] unrecognized feature gate: Example Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656976 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656980 4812 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656986 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656991 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.656996 4812 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657001 4812 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657007 4812 
feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657012 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657017 4812 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657022 4812 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657027 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657032 4812 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657037 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657042 4812 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657047 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657052 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657057 4812 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657061 4812 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657066 4812 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657071 4812 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657076 4812 feature_gate.go:330] unrecognized feature gate: DNSNameResolver 
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657080 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657087 4812 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657093 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657098 4812 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657104 4812 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.657113 4812 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657329 4812 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657340 4812 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657346 4812 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657351 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657364 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 13:31:51 crc 
kubenswrapper[4812]: W0216 13:31:51.657369 4812 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657374 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657379 4812 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657385 4812 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657390 4812 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657396 4812 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657402 4812 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657412 4812 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657418 4812 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657424 4812 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657430 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657439 4812 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657467 4812 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657475 4812 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657482 4812 feature_gate.go:330] unrecognized feature 
gate: ClusterAPIInstall Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657489 4812 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657496 4812 feature_gate.go:330] unrecognized feature gate: Example Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657502 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657517 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657523 4812 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657528 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657533 4812 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657537 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657542 4812 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657547 4812 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657552 4812 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657557 4812 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657561 4812 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657566 4812 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 13:31:51 crc kubenswrapper[4812]: 
W0216 13:31:51.657573 4812 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657578 4812 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657583 4812 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657589 4812 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657598 4812 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657603 4812 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657609 4812 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657614 4812 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657618 4812 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657625 4812 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657631 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657636 4812 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657642 4812 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657647 4812 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657652 4812 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657657 4812 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657663 4812 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657668 4812 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657673 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657678 4812 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657682 4812 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657687 4812 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657692 4812 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657697 4812 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 
13:31:51.657702 4812 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657707 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657712 4812 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657717 4812 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657722 4812 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657726 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657733 4812 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657739 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657745 4812 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657750 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657755 4812 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657761 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.657768 4812 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.657777 4812 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.659205 4812 server.go:940] "Client rotation is on, will bootstrap in background" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.664246 4812 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.664427 4812 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.666639 4812 server.go:997] "Starting client certificate rotation" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.666715 4812 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.667001 4812 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-20 11:02:46.631527217 +0000 UTC Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.667212 4812 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.698360 4812 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.701057 4812 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 13:31:51 crc kubenswrapper[4812]: E0216 13:31:51.701599 4812 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.252:6443: connect: connection refused" logger="UnhandledError" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.726205 4812 log.go:25] "Validated CRI v1 runtime API" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.764167 4812 log.go:25] "Validated CRI v1 image API" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.766410 4812 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.773472 4812 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-13-27-31-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.773511 4812 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.795187 4812 manager.go:217] Machine: {Timestamp:2026-02-16 13:31:51.792001395 +0000 UTC m=+0.856332156 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a8093dd5-8447-4cfc-ac6f-47d191141ed0 BootID:4981c762-995f-430f-ab9d-bca26618d78a Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 
Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:fe:78:c0 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:fe:78:c0 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f4:9f:46 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:23:ab:d1 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ae:98:47 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d4:e2:88 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:d2:2b:76:02:b5:b1 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f2:6d:63:10:c4:3a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.795487 4812 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.795673 4812 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.797076 4812 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.797347 4812 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.797395 4812 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.797650 4812 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.797663 4812 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.798479 4812 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.798512 4812 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.798696 4812 state_mem.go:36] "Initialized new in-memory state store" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.799279 4812 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.804025 4812 kubelet.go:418] "Attempting to sync node with API server" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.804051 4812 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.804091 4812 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.804107 4812 kubelet.go:324] "Adding apiserver pod source" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.804126 4812 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.809257 4812 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.809611 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:51 crc kubenswrapper[4812]: E0216 13:31:51.809672 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.252:6443: connect: connection refused" logger="UnhandledError" Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.810416 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:51 crc kubenswrapper[4812]: E0216 13:31:51.810494 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.252:6443: connect: connection refused" logger="UnhandledError" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.810780 4812 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.813042 4812 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.814544 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.814569 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.814576 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.814585 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.814598 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.814613 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.814624 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.814638 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.814646 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.814654 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.814666 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.814674 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.815560 4812 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.816002 4812 server.go:1280] "Started kubelet" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.817525 4812 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:51 crc systemd[1]: Started Kubernetes Kubelet. Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.817846 4812 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.817820 4812 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.819275 4812 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.829737 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.829783 4812 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.830257 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 14:08:39.298669951 +0000 UTC Feb 16 13:31:51 crc kubenswrapper[4812]: E0216 13:31:51.830532 4812 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.831219 4812 server.go:460] "Adding debug handlers to kubelet server" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.831238 4812 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 
13:31:51.831263 4812 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.831528 4812 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.831626 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:51 crc kubenswrapper[4812]: E0216 13:31:51.831675 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.252:6443: connect: connection refused" logger="UnhandledError" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.832217 4812 factory.go:55] Registering systemd factory Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.832237 4812 factory.go:221] Registration of the systemd container factory successfully Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.832611 4812 factory.go:153] Registering CRI-O factory Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.832643 4812 factory.go:221] Registration of the crio container factory successfully Feb 16 13:31:51 crc kubenswrapper[4812]: E0216 13:31:51.831634 4812 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.252:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894bd51a4a8cb60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 13:31:51.815973728 +0000 UTC m=+0.880304429,LastTimestamp:2026-02-16 13:31:51.815973728 +0000 UTC m=+0.880304429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.832749 4812 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.832781 4812 factory.go:103] Registering Raw factory Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.832801 4812 manager.go:1196] Started watching for new ooms in manager Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.833926 4812 manager.go:319] Starting recovery of all containers Feb 16 13:31:51 crc kubenswrapper[4812]: E0216 13:31:51.834617 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.252:6443: connect: connection refused" interval="200ms" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.843909 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.843970 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 16 13:31:51 crc 
kubenswrapper[4812]: I0216 13:31:51.843986 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844001 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844014 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844027 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844038 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844050 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844064 4812 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844080 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844095 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844112 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844128 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844145 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844162 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844177 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844194 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844210 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844298 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844314 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844330 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844346 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844363 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844394 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844411 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844431 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844470 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844488 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844505 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844521 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844539 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844555 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844570 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844587 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844605 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844622 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844638 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844654 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844672 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" 
seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844688 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844703 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844722 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844737 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844753 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844770 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844789 4812 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844805 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844823 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844841 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844887 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844905 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844922 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844943 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844963 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.844982 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845000 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845019 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845040 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845058 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845074 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845090 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845108 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845124 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845140 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" 
seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845156 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845172 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845188 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845205 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845220 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845237 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 16 13:31:51 crc 
kubenswrapper[4812]: I0216 13:31:51.845254 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845270 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845373 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845394 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845411 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845428 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845474 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845491 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845507 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845527 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845544 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845570 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845587 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845602 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845619 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845638 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845655 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845670 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845687 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845705 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845721 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845739 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845756 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845770 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845788 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845805 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845821 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845839 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845856 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845872 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845889 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845905 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845923 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845941 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845971 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.845990 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846009 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846027 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846047 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846064 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846082 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846099 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846115 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846136 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846153 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846170 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846187 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846204 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846221 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846239 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846256 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846273 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846290 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846307 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846322 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846340 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846356 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846371 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846386 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846402 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846418 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846434 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846477 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846495 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846511 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846527 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846543 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846561 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846577 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846593 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846609 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846629 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846646 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846663 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846675 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846691 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846705 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846718 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846732 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846751 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846767 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846784 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846800 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846819 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846835 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846851 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846867 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846883 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846898 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846915 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846929 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846945 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846960 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846974 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.846990 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.848970 4812 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849010 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849033 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849051 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849070 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849088 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849104 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849121 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849137 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849156 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849174 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849190 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849208 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849225 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849245 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849298 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849318 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849339 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849355 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849374 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849396 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849472 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849492 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849513 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849546 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849565 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849584 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849602 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 
13:31:51.849623 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849642 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849662 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849680 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849699 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849718 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849736 4812 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849756 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849780 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849800 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849819 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849837 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849855 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849872 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849890 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849908 4812 reconstruct.go:97] "Volume reconstruction finished" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.849920 4812 reconciler.go:26] "Reconciler: start to sync state" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.864871 4812 manager.go:324] Recovery completed Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.875703 4812 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.877676 4812 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.877707 4812 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.877740 4812 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 13:31:51 crc kubenswrapper[4812]: E0216 13:31:51.877793 4812 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.879071 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:51 crc kubenswrapper[4812]: W0216 13:31:51.879934 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:51 crc kubenswrapper[4812]: E0216 13:31:51.880003 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.252:6443: connect: connection refused" logger="UnhandledError" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.880578 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.880636 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.880655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.884046 4812 
cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.884070 4812 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.884099 4812 state_mem.go:36] "Initialized new in-memory state store" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.905666 4812 policy_none.go:49] "None policy: Start" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.908070 4812 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.908093 4812 state_mem.go:35] "Initializing new in-memory state store" Feb 16 13:31:51 crc kubenswrapper[4812]: E0216 13:31:51.931517 4812 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.968901 4812 manager.go:334] "Starting Device Plugin manager" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.968952 4812 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.968967 4812 server.go:79] "Starting device plugin registration server" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.969377 4812 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.969397 4812 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.969774 4812 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.969871 4812 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.969881 4812 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.978110 4812 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.978244 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.979633 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.979678 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.979689 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.979897 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.980116 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.980183 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.980900 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.980959 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.980978 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.981142 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.981286 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.981334 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.982183 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.982206 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.982214 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.982286 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.982325 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.982348 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.982355 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.982363 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.982376 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.982749 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:51 crc 
kubenswrapper[4812]: E0216 13:31:51.982751 4812 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.983286 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.983314 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.983668 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.983696 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.983707 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.983884 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.984046 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.984080 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.984760 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.984790 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.984805 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.984906 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.984969 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.985038 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.985778 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.985864 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.985922 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.986120 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.986195 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.987875 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.987912 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:51 crc kubenswrapper[4812]: I0216 13:31:51.987925 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:52 crc kubenswrapper[4812]: E0216 13:31:52.035344 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.252:6443: connect: connection refused" interval="400ms" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.051961 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.052046 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.052067 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.052085 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.052102 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.052507 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.052626 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.052808 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.052844 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.052897 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.052990 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.053048 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.053109 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.053162 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.053206 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.070297 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.071868 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.071913 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.071929 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.072150 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 13:31:52 crc kubenswrapper[4812]: E0216 13:31:52.072840 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.252:6443: connect: connection refused" node="crc" 
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.154278 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.154340 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.154394 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.154416 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.154436 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.154475 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.154494 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.154523 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.154544 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.154566 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.154964 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.154988 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155011 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155031 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155054 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155083 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155108 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155161 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155223 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155249 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155266 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155287 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 13:31:52
crc kubenswrapper[4812]: I0216 13:31:52.155341 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155343 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155377 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155432 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155488 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155504 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155517 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.155512 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.273540 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.278045 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.278129 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.278144 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.278178 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 13:31:52 crc kubenswrapper[4812]: E0216 13:31:52.278833 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.252:6443: connect: connection refused" node="crc" Feb 16 13:31:52 crc 
kubenswrapper[4812]: I0216 13:31:52.310694 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.314921 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.331605 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.340359 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.344631 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:31:52 crc kubenswrapper[4812]: W0216 13:31:52.374681 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-549ac4bc5d78804073674793e72a7185b3a048255961d3c9af29d3a548105438 WatchSource:0}: Error finding container 549ac4bc5d78804073674793e72a7185b3a048255961d3c9af29d3a548105438: Status 404 returned error can't find the container with id 549ac4bc5d78804073674793e72a7185b3a048255961d3c9af29d3a548105438 Feb 16 13:31:52 crc kubenswrapper[4812]: W0216 13:31:52.375247 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-3718fb73938d960550c709f54d493b5b39866b18dd7c0340eceb2c461e0c5375 WatchSource:0}: Error finding container 3718fb73938d960550c709f54d493b5b39866b18dd7c0340eceb2c461e0c5375: Status 404 returned error can't find the container with id 
3718fb73938d960550c709f54d493b5b39866b18dd7c0340eceb2c461e0c5375 Feb 16 13:31:52 crc kubenswrapper[4812]: W0216 13:31:52.381974 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-e7ad22682a38cf3c83e500f782159f449c057418587b4cfcec73dd709359ee1b WatchSource:0}: Error finding container e7ad22682a38cf3c83e500f782159f449c057418587b4cfcec73dd709359ee1b: Status 404 returned error can't find the container with id e7ad22682a38cf3c83e500f782159f449c057418587b4cfcec73dd709359ee1b Feb 16 13:31:52 crc kubenswrapper[4812]: W0216 13:31:52.384151 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-3be0f6b71ad43976c1f6933c3b17943a79b3b0e1e004c140528c8508a48d9d10 WatchSource:0}: Error finding container 3be0f6b71ad43976c1f6933c3b17943a79b3b0e1e004c140528c8508a48d9d10: Status 404 returned error can't find the container with id 3be0f6b71ad43976c1f6933c3b17943a79b3b0e1e004c140528c8508a48d9d10 Feb 16 13:31:52 crc kubenswrapper[4812]: E0216 13:31:52.436290 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.252:6443: connect: connection refused" interval="800ms" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.679683 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.683468 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.683511 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:52 crc 
kubenswrapper[4812]: I0216 13:31:52.683525 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.683551 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 13:31:52 crc kubenswrapper[4812]: E0216 13:31:52.684163 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.252:6443: connect: connection refused" node="crc" Feb 16 13:31:52 crc kubenswrapper[4812]: W0216 13:31:52.710829 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:52 crc kubenswrapper[4812]: E0216 13:31:52.710919 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.252:6443: connect: connection refused" logger="UnhandledError" Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.819796 4812 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.830900 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 06:15:55.087947367 +0000 UTC Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.885319 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e7ad22682a38cf3c83e500f782159f449c057418587b4cfcec73dd709359ee1b"} Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.907280 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3718fb73938d960550c709f54d493b5b39866b18dd7c0340eceb2c461e0c5375"} Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.908673 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"549ac4bc5d78804073674793e72a7185b3a048255961d3c9af29d3a548105438"} Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.909904 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7401a4458dd0bf931756bbe76fc7cd92adbb3286a46c7f4b723abc4b3a0c81a8"} Feb 16 13:31:52 crc kubenswrapper[4812]: I0216 13:31:52.910936 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3be0f6b71ad43976c1f6933c3b17943a79b3b0e1e004c140528c8508a48d9d10"} Feb 16 13:31:53 crc kubenswrapper[4812]: W0216 13:31:53.026975 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:53 crc kubenswrapper[4812]: E0216 13:31:53.027096 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to 
list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.252:6443: connect: connection refused" logger="UnhandledError" Feb 16 13:31:53 crc kubenswrapper[4812]: E0216 13:31:53.237692 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.252:6443: connect: connection refused" interval="1.6s" Feb 16 13:31:53 crc kubenswrapper[4812]: W0216 13:31:53.351721 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:53 crc kubenswrapper[4812]: E0216 13:31:53.351809 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.252:6443: connect: connection refused" logger="UnhandledError" Feb 16 13:31:53 crc kubenswrapper[4812]: W0216 13:31:53.385632 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:53 crc kubenswrapper[4812]: E0216 13:31:53.385753 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.252:6443: connect: connection refused" 
logger="UnhandledError" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.484252 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.488613 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.488683 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.488701 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.488806 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 13:31:53 crc kubenswrapper[4812]: E0216 13:31:53.489577 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.252:6443: connect: connection refused" node="crc" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.819722 4812 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.830789 4812 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.831864 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 22:43:18.083974636 +0000 UTC Feb 16 13:31:53 crc kubenswrapper[4812]: E0216 13:31:53.832187 4812 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while 
requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.252:6443: connect: connection refused" logger="UnhandledError" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.916066 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b"} Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.916128 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623"} Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.916151 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e"} Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.916169 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175"} Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.916135 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.917329 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 
13:31:53.917371 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.917383 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.917575 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea" exitCode=0 Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.917651 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea"} Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.917728 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.918781 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.918812 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.918823 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.919426 4812 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fd061327369dababac46d3385196c0540cbdd19672a6a3a8a9d53b1f92175a10" exitCode=0 Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.919492 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fd061327369dababac46d3385196c0540cbdd19672a6a3a8a9d53b1f92175a10"} Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.919531 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.920031 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.921280 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.921303 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.921314 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.921334 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.921369 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.921378 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.923174 4812 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="97e12743ac6a6f99e8efd74b1fcafc157b16793659394ef957ca99f5886fd478" exitCode=0 Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.923241 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.923545 4812 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"97e12743ac6a6f99e8efd74b1fcafc157b16793659394ef957ca99f5886fd478"} Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.923846 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.923861 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.923869 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.925980 4812 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e" exitCode=0 Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.926008 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e"} Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.926061 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.926808 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.926841 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:53 crc kubenswrapper[4812]: I0216 13:31:53.926854 4812 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.224229 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.292211 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.819222 4812 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.832490 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:28:47.64956909 +0000 UTC Feb 16 13:31:54 crc kubenswrapper[4812]: E0216 13:31:54.839204 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.252:6443: connect: connection refused" interval="3.2s" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.871054 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.930467 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138"} Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.930513 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035"} Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.930521 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca"} Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.932290 4812 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8e8e48e3d5275217ab43c9e6c542e58e49cee25ac998a1c5dc0e3e21fcb2914a" exitCode=0 Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.932354 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8e8e48e3d5275217ab43c9e6c542e58e49cee25ac998a1c5dc0e3e21fcb2914a"} Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.932381 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.933615 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.933636 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.933645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.935032 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2c7c33b5d95fa2865d325956c87e1024adf7bf0a40ef2e590b467f9cee892138"} Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.935065 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.936032 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.936061 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.936073 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.938975 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"24f3e9624fe4d351638e9b45a1d575c06a3c9e7e12a77dcd8cb6a61996fe51fe"} Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.939038 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"858f53f244902f66ee53409db591138aba707c545b1f7cc0da69a691be1e2138"} Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.939056 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"85905db3e100d71dfb29420eccfd9a129be4b9a6950a8e5e2915d7f8aabcc255"} Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.938991 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:54 crc 
kubenswrapper[4812]: I0216 13:31:54.939068 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.940355 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.940390 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.940402 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.940418 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.940485 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:54 crc kubenswrapper[4812]: I0216 13:31:54.940505 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.090083 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.091040 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.091143 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.091165 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.091194 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 
16 13:31:55 crc kubenswrapper[4812]: E0216 13:31:55.091827 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.252:6443: connect: connection refused" node="crc" Feb 16 13:31:55 crc kubenswrapper[4812]: W0216 13:31:55.254402 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:55 crc kubenswrapper[4812]: E0216 13:31:55.254533 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.252:6443: connect: connection refused" logger="UnhandledError" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.820224 4812 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.832780 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:13:59.685650393 +0000 UTC Feb 16 13:31:55 crc kubenswrapper[4812]: W0216 13:31:55.876578 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:55 crc kubenswrapper[4812]: E0216 13:31:55.876677 4812 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.252:6443: connect: connection refused" logger="UnhandledError" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.953074 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1ac4778b23c98ccc871567dac911aae65499dd17212eba145817044f6f6d19c8"} Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.953131 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea"} Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.953219 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.954106 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.954142 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.954154 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.955761 4812 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c86585b617171c2b91e903867bc52a9f4cdfa30f38a2c521eaf17796bd2bde77" exitCode=0 Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.955823 4812 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.955858 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c86585b617171c2b91e903867bc52a9f4cdfa30f38a2c521eaf17796bd2bde77"} Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.955883 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.955917 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.955957 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.956222 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.956564 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.956604 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.956617 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.956693 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.956831 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.956764 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.956854 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.956890 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.956886 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.956913 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.957742 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.957760 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:55 crc kubenswrapper[4812]: I0216 13:31:55.957768 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:56 crc kubenswrapper[4812]: W0216 13:31:56.143312 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:56 crc kubenswrapper[4812]: E0216 13:31:56.143386 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.252:6443: connect: connection refused" logger="UnhandledError" Feb 16 13:31:56 crc kubenswrapper[4812]: W0216 
13:31:56.207417 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.252:6443: connect: connection refused Feb 16 13:31:56 crc kubenswrapper[4812]: E0216 13:31:56.207514 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.252:6443: connect: connection refused" logger="UnhandledError" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.273099 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.833434 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:26:05.213721036 +0000 UTC Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.961515 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.964142 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1ac4778b23c98ccc871567dac911aae65499dd17212eba145817044f6f6d19c8" exitCode=255 Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.964198 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1ac4778b23c98ccc871567dac911aae65499dd17212eba145817044f6f6d19c8"} Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.964236 
4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.965696 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.965735 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.965747 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.966298 4812 scope.go:117] "RemoveContainer" containerID="1ac4778b23c98ccc871567dac911aae65499dd17212eba145817044f6f6d19c8" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.968176 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"49e755a103bb0664336ecbcde3ffee85aadeb7dd45124d8a4dde5df040aab960"} Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.968206 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.968206 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ff7db97461da82f1d41669a412a218f2e87150d645100be61b2ea25efd56886b"} Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.968279 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.968300 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"337c70169829a1b4a93119567341aa50d1653505141250aed5163027547b4fec"} Feb 16 
13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.968319 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.968338 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"90a6f278338fc486b329f3aa59894a44703876c3ca31634e3e9ffa0d0ec718aa"} Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.968416 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bec2cb6a4b802d58024f315662d411bad0bf7e99570b5fb7c503c3d6392b1fb4"} Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.968319 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.969063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.969111 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.969129 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.970774 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.970970 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.970999 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 
13:31:56.973655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.973701 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:56 crc kubenswrapper[4812]: I0216 13:31:56.973712 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:57 crc kubenswrapper[4812]: I0216 13:31:57.834268 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:03:44.234973564 +0000 UTC Feb 16 13:31:57 crc kubenswrapper[4812]: I0216 13:31:57.871750 4812 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 13:31:57 crc kubenswrapper[4812]: I0216 13:31:57.871864 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 13:31:57 crc kubenswrapper[4812]: I0216 13:31:57.974639 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 13:31:57 crc kubenswrapper[4812]: I0216 13:31:57.977361 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be"} Feb 16 13:31:57 crc kubenswrapper[4812]: I0216 13:31:57.977413 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:57 crc kubenswrapper[4812]: I0216 13:31:57.977406 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:57 crc kubenswrapper[4812]: I0216 13:31:57.977522 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:31:57 crc kubenswrapper[4812]: I0216 13:31:57.979016 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:57 crc kubenswrapper[4812]: I0216 13:31:57.979039 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:57 crc kubenswrapper[4812]: I0216 13:31:57.979093 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:57 crc kubenswrapper[4812]: I0216 13:31:57.979119 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:57 crc kubenswrapper[4812]: I0216 13:31:57.979121 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:57 crc kubenswrapper[4812]: I0216 13:31:57.979205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:57 crc kubenswrapper[4812]: I0216 13:31:57.991467 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:31:58 crc kubenswrapper[4812]: I0216 13:31:58.109134 4812 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 13:31:58 crc kubenswrapper[4812]: I0216 13:31:58.292333 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:58 crc kubenswrapper[4812]: I0216 13:31:58.294296 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:58 crc kubenswrapper[4812]: I0216 13:31:58.294347 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:58 crc kubenswrapper[4812]: I0216 13:31:58.294367 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:58 crc kubenswrapper[4812]: I0216 13:31:58.294402 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 13:31:58 crc kubenswrapper[4812]: I0216 13:31:58.835062 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 06:04:39.144510712 +0000 UTC Feb 16 13:31:58 crc kubenswrapper[4812]: I0216 13:31:58.980072 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:58 crc kubenswrapper[4812]: I0216 13:31:58.980170 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:31:58 crc kubenswrapper[4812]: I0216 13:31:58.981113 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:58 crc kubenswrapper[4812]: I0216 13:31:58.981164 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:58 crc kubenswrapper[4812]: I0216 13:31:58.981183 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 
13:31:59 crc kubenswrapper[4812]: I0216 13:31:59.000838 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:31:59 crc kubenswrapper[4812]: I0216 13:31:59.001004 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:59 crc kubenswrapper[4812]: I0216 13:31:59.002388 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:59 crc kubenswrapper[4812]: I0216 13:31:59.002431 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:59 crc kubenswrapper[4812]: I0216 13:31:59.002474 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:59 crc kubenswrapper[4812]: I0216 13:31:59.485874 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 16 13:31:59 crc kubenswrapper[4812]: I0216 13:31:59.486071 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:59 crc kubenswrapper[4812]: I0216 13:31:59.487381 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:59 crc kubenswrapper[4812]: I0216 13:31:59.487494 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:59 crc kubenswrapper[4812]: I0216 13:31:59.487524 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:31:59 crc kubenswrapper[4812]: I0216 13:31:59.835872 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 02:10:58.765925446 +0000 UTC Feb 16 13:31:59 crc 
kubenswrapper[4812]: I0216 13:31:59.982847 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:31:59 crc kubenswrapper[4812]: I0216 13:31:59.984150 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:31:59 crc kubenswrapper[4812]: I0216 13:31:59.984196 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:31:59 crc kubenswrapper[4812]: I0216 13:31:59.984214 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:00 crc kubenswrapper[4812]: I0216 13:32:00.401933 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:32:00 crc kubenswrapper[4812]: I0216 13:32:00.836838 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 14:33:41.186376619 +0000 UTC Feb 16 13:32:00 crc kubenswrapper[4812]: I0216 13:32:00.985560 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:32:00 crc kubenswrapper[4812]: I0216 13:32:00.986894 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:00 crc kubenswrapper[4812]: I0216 13:32:00.986975 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:00 crc kubenswrapper[4812]: I0216 13:32:00.986999 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:01 crc kubenswrapper[4812]: I0216 13:32:01.647972 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 16 13:32:01 crc kubenswrapper[4812]: I0216 13:32:01.648210 4812 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:32:01 crc kubenswrapper[4812]: I0216 13:32:01.650168 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:01 crc kubenswrapper[4812]: I0216 13:32:01.650231 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:01 crc kubenswrapper[4812]: I0216 13:32:01.650253 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:01 crc kubenswrapper[4812]: I0216 13:32:01.837849 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 16:57:59.355336433 +0000 UTC Feb 16 13:32:01 crc kubenswrapper[4812]: E0216 13:32:01.983749 4812 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 13:32:02 crc kubenswrapper[4812]: I0216 13:32:02.838152 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 00:40:35.433556981 +0000 UTC Feb 16 13:32:03 crc kubenswrapper[4812]: I0216 13:32:03.838568 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:26:56.445604464 +0000 UTC Feb 16 13:32:04 crc kubenswrapper[4812]: I0216 13:32:04.839290 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:59:03.693358333 +0000 UTC Feb 16 13:32:05 crc kubenswrapper[4812]: I0216 13:32:05.840060 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-11-16 10:17:22.315581508 +0000 UTC Feb 16 13:32:06 crc kubenswrapper[4812]: I0216 13:32:06.819973 4812 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 16 13:32:06 crc kubenswrapper[4812]: I0216 13:32:06.841170 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 17:23:53.565377858 +0000 UTC Feb 16 13:32:06 crc kubenswrapper[4812]: I0216 13:32:06.916070 4812 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 13:32:06 crc kubenswrapper[4812]: I0216 13:32:06.916129 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 13:32:06 crc kubenswrapper[4812]: I0216 13:32:06.921813 4812 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 13:32:06 crc kubenswrapper[4812]: I0216 13:32:06.921874 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 13:32:07 crc kubenswrapper[4812]: I0216 13:32:07.841763 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 15:16:49.382471303 +0000 UTC Feb 16 13:32:07 crc kubenswrapper[4812]: I0216 13:32:07.871238 4812 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 13:32:07 crc kubenswrapper[4812]: I0216 13:32:07.871315 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 13:32:07 crc kubenswrapper[4812]: I0216 13:32:07.995514 4812 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]log ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]etcd ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 16 13:32:07 crc kubenswrapper[4812]: 
[+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/priority-and-fairness-filter ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/start-apiextensions-informers ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/start-apiextensions-controllers ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/crd-informer-synced ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/start-system-namespaces-controller ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 16 13:32:07 crc kubenswrapper[4812]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/bootstrap-controller ok Feb 16 13:32:07 crc kubenswrapper[4812]: 
[+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/start-kube-aggregator-informers ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/apiservice-registration-controller ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/apiservice-discovery-controller ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]autoregister-completion ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/apiservice-openapi-controller ok Feb 16 13:32:07 crc kubenswrapper[4812]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 16 13:32:07 crc kubenswrapper[4812]: livez check failed Feb 16 13:32:07 crc kubenswrapper[4812]: I0216 13:32:07.995576 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:32:08 crc kubenswrapper[4812]: I0216 13:32:08.843070 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 23:08:00.710895179 +0000 UTC Feb 16 13:32:09 crc kubenswrapper[4812]: I0216 13:32:09.005480 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:32:09 crc kubenswrapper[4812]: I0216 13:32:09.005637 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:32:09 crc 
kubenswrapper[4812]: I0216 13:32:09.006601 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:09 crc kubenswrapper[4812]: I0216 13:32:09.006647 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:09 crc kubenswrapper[4812]: I0216 13:32:09.006659 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:09 crc kubenswrapper[4812]: I0216 13:32:09.513510 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 16 13:32:09 crc kubenswrapper[4812]: I0216 13:32:09.513699 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:32:09 crc kubenswrapper[4812]: I0216 13:32:09.514663 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:09 crc kubenswrapper[4812]: I0216 13:32:09.514692 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:09 crc kubenswrapper[4812]: I0216 13:32:09.514702 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:09 crc kubenswrapper[4812]: I0216 13:32:09.529831 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 16 13:32:09 crc kubenswrapper[4812]: I0216 13:32:09.844098 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:36:35.435421376 +0000 UTC Feb 16 13:32:10 crc kubenswrapper[4812]: I0216 13:32:10.006685 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:32:10 crc kubenswrapper[4812]: I0216 13:32:10.007722 4812 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:10 crc kubenswrapper[4812]: I0216 13:32:10.007757 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:10 crc kubenswrapper[4812]: I0216 13:32:10.007769 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:10 crc kubenswrapper[4812]: I0216 13:32:10.844804 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 23:39:55.169154594 +0000 UTC Feb 16 13:32:11 crc kubenswrapper[4812]: I0216 13:32:11.845574 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:37:10.299627662 +0000 UTC Feb 16 13:32:11 crc kubenswrapper[4812]: E0216 13:32:11.921405 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 16 13:32:11 crc kubenswrapper[4812]: I0216 13:32:11.924261 4812 trace.go:236] Trace[1040351466]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 13:31:59.962) (total time: 11961ms): Feb 16 13:32:11 crc kubenswrapper[4812]: Trace[1040351466]: ---"Objects listed" error: 11961ms (13:32:11.924) Feb 16 13:32:11 crc kubenswrapper[4812]: Trace[1040351466]: [11.961250915s] [11.961250915s] END Feb 16 13:32:11 crc kubenswrapper[4812]: I0216 13:32:11.924336 4812 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 13:32:11 crc kubenswrapper[4812]: I0216 13:32:11.924508 4812 trace.go:236] Trace[2022442899]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 13:32:00.580) (total time: 11343ms): Feb 16 13:32:11 crc kubenswrapper[4812]: Trace[2022442899]: ---"Objects listed" error: 11343ms (13:32:11.924) Feb 16 13:32:11 crc kubenswrapper[4812]: Trace[2022442899]: [11.3437135s] [11.3437135s] END Feb 16 13:32:11 crc kubenswrapper[4812]: I0216 13:32:11.924549 4812 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 13:32:11 crc kubenswrapper[4812]: E0216 13:32:11.927938 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 16 13:32:11 crc kubenswrapper[4812]: I0216 13:32:11.929656 4812 trace.go:236] Trace[1100351624]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 13:32:01.835) (total time: 10093ms): Feb 16 13:32:11 crc kubenswrapper[4812]: Trace[1100351624]: ---"Objects listed" error: 10093ms (13:32:11.929) Feb 16 13:32:11 crc kubenswrapper[4812]: Trace[1100351624]: [10.093931823s] [10.093931823s] END Feb 16 13:32:11 crc kubenswrapper[4812]: I0216 13:32:11.929687 4812 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 13:32:11 crc kubenswrapper[4812]: I0216 13:32:11.929657 4812 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 13:32:11 crc kubenswrapper[4812]: I0216 13:32:11.933327 4812 trace.go:236] Trace[1939515004]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 13:32:01.928) (total time: 10004ms): Feb 16 13:32:11 crc kubenswrapper[4812]: Trace[1939515004]: ---"Objects listed" error: 10004ms (13:32:11.933) Feb 16 13:32:11 crc kubenswrapper[4812]: Trace[1939515004]: [10.004518338s] [10.004518338s] END Feb 16 13:32:11 crc kubenswrapper[4812]: I0216 13:32:11.933376 4812 reflector.go:368] Caches populated for *v1.Service 
from k8s.io/client-go/informers/factory.go:160 Feb 16 13:32:11 crc kubenswrapper[4812]: I0216 13:32:11.934382 4812 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.819064 4812 apiserver.go:52] "Watching apiserver" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.827839 4812 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.828185 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.829083 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:12 crc kubenswrapper[4812]: E0216 13:32:12.829209 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.829284 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.829313 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.829383 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 13:32:12 crc kubenswrapper[4812]: E0216 13:32:12.829384 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.829495 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.830122 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:12 crc kubenswrapper[4812]: E0216 13:32:12.830211 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.832167 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.832688 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.832728 4812 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.833941 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.834230 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.834278 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.834329 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.834376 4812 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.834404 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.834411 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.834685 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.834772 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.834775 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.834825 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.834776 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.834805 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.834976 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835014 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835043 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835074 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835090 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835102 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835120 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835133 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835160 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835189 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835211 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835236 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835265 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835294 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835317 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835341 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835371 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835380 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835385 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835400 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835483 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835520 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835547 4812 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835526 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835579 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835608 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835618 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835697 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835738 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835778 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835811 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835832 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835842 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835873 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835905 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835944 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836117 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836150 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836181 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836212 4812 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836239 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836272 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836298 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836326 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836354 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 
16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836379 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836403 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836431 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836491 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836521 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836553 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836582 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836615 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836650 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836677 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836703 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836730 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836756 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836787 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836812 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836842 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836874 4812 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836897 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836923 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837149 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837178 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837202 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837228 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837255 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837285 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837315 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837344 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837378 4812 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837408 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837458 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837490 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837519 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837544 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" 
(UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837574 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837604 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837631 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837657 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837682 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837707 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837731 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837759 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837783 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837810 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837833 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 
13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837858 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837882 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837907 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837932 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837956 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837980 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838005 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838027 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838052 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838077 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838103 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") 
" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838128 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838157 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838183 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838210 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838251 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838274 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838299 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838322 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838349 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838373 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838399 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 
13:32:12.838425 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838699 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838735 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838761 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838789 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838817 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838846 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838872 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838898 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838931 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838959 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838986 4812 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839019 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839045 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839072 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839101 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839128 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839152 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839179 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839207 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839231 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839257 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 13:32:12 crc 
kubenswrapper[4812]: I0216 13:32:12.839293 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839318 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835902 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835956 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836121 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836186 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836229 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836395 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836402 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836432 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836432 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836511 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.835314 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836619 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836688 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836706 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836808 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836827 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.836923 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837185 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837224 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837360 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.837424 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838471 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838664 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838703 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838795 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.838890 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839159 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839095 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839236 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: E0216 13:32:12.839364 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:32:13.339319091 +0000 UTC m=+22.403649842 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841616 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839635 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841653 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841681 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841703 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841725 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841742 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841759 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841777 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841800 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841816 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841833 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841877 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 
13:32:12.841896 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841914 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841935 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841950 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841991 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842009 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842023 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842043 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842059 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842075 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842091 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842109 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842889 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842910 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842954 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842985 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843032 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 
13:32:12.843064 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843090 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843138 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843161 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843207 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843231 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843279 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843304 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843326 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843371 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843397 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:32:12 
crc kubenswrapper[4812]: I0216 13:32:12.843456 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843485 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843545 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843574 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843628 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843656 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843709 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843737 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843788 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843813 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843863 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 13:32:12 
crc kubenswrapper[4812]: I0216 13:32:12.843895 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843944 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843970 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843992 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844012 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844030 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844046 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844065 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844083 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844133 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844151 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 13:32:12 crc 
kubenswrapper[4812]: I0216 13:32:12.844200 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844228 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844268 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844286 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844305 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844326 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844345 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844363 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844384 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844401 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844421 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844464 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844483 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844500 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844571 4812 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844587 4812 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844604 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844616 4812 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844630 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844641 4812 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844651 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844662 4812 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844672 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844683 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844693 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839687 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.839842 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.840184 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.840322 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.840522 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.840526 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.840907 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.840930 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.845408 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.845718 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.846029 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.846132 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.846281 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.846545 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.846978 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.847185 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.847619 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.848111 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.848510 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.848534 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.848633 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.848761 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.849027 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.849349 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.849489 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.849633 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.849710 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.849841 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.849954 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.850348 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.850439 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.850680 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.850691 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841589 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.850796 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841623 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842049 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842306 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842401 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842495 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.850877 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842498 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842629 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842839 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.842958 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843689 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.843713 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844225 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844620 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844780 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844909 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.844994 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.845092 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.845110 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.845133 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.851376 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.851886 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.852180 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.852397 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.852359 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:55:23.770318839 +0000 UTC Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.852473 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.851702 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.852711 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.852756 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). 
InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.852980 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.853003 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.853095 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.853248 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.853290 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.854163 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.854662 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.855162 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.855215 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.855492 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.855602 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.855640 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.855683 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.855686 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.856000 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.856174 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.856262 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.856315 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.856705 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.856776 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.856893 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.858116 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.858196 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.858257 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.858341 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.858390 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.858469 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.858811 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.858936 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.858952 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.858999 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.860272 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.860338 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.860539 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.860864 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.861051 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.861169 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.861228 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.861821 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.861896 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.862007 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.862093 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.862346 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.862136 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.862302 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.862435 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.862476 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.862764 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.862995 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.863211 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.863244 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.863284 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.863525 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.864435 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.864680 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.864852 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.865066 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: E0216 13:32:12.865329 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.865358 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.865386 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: E0216 13:32:12.865462 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:13.365407504 +0000 UTC m=+22.429738395 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.866436 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.866510 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.866657 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.866734 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: E0216 13:32:12.866804 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.866842 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: E0216 13:32:12.866903 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:13.366877725 +0000 UTC m=+22.431208446 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.867048 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.867051 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.867115 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.867146 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.867640 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.867834 4812 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.867902 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.868113 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.868162 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.868346 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.868522 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.868598 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.869263 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.869560 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.869584 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.869602 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.869965 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.870014 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.841213 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.872924 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.876677 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.876055 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.882580 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.882787 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: E0216 13:32:12.882886 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.882908 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: E0216 13:32:12.882920 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 13:32:12 crc kubenswrapper[4812]: E0216 13:32:12.882954 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:12 crc kubenswrapper[4812]: E0216 13:32:12.883026 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:13.383004974 +0000 UTC m=+22.447335685 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.883013 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.883301 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.883730 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.887871 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 13:32:12 crc kubenswrapper[4812]: E0216 13:32:12.888643 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 13:32:12 crc kubenswrapper[4812]: E0216 13:32:12.888676 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 13:32:12 crc kubenswrapper[4812]: E0216 13:32:12.888689 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:12 crc kubenswrapper[4812]: E0216 13:32:12.888739 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:13.388721667 +0000 UTC m=+22.453052368 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.894777 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.913547 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.913756 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.916258 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.917243 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.921244 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.931701 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.936419 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.942179 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946627 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946729 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946774 4812 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946784 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946794 4812 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946802 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946811 4812 reconciler_common.go:293] 
"Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946819 4812 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946828 4812 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946836 4812 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946844 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946853 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946860 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946868 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946877 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946885 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946893 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946901 4812 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946909 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946917 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946925 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") 
on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946933 4812 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946941 4812 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946949 4812 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946957 4812 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946967 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946975 4812 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.946983 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 
13:32:12.946994 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947003 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947015 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947026 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947037 4812 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947048 4812 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947062 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947076 4812 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947089 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947101 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947112 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947123 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947137 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947149 4812 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947161 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947173 4812 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947187 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947197 4812 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947207 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947215 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947223 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947232 4812 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947241 4812 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947250 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947259 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947268 4812 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947277 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947288 4812 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947296 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947304 4812 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947313 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947322 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947330 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947339 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947347 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947355 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947363 4812 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947371 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947379 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947387 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947394 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947401 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947409 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947417 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc 
kubenswrapper[4812]: I0216 13:32:12.947425 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947433 4812 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947458 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947468 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947479 4812 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947488 4812 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947498 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947507 4812 reconciler_common.go:293] "Volume detached for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947515 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947523 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947532 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947540 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947549 4812 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947557 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947565 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947575 4812 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947583 4812 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947591 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947599 4812 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947610 4812 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947618 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947625 4812 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947633 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947643 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947651 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947658 4812 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947666 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947673 4812 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947689 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" 
Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947698 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947705 4812 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947714 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947722 4812 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947729 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947738 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947746 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc 
kubenswrapper[4812]: I0216 13:32:12.947753 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947761 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947769 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947777 4812 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947784 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947792 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947800 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947809 4812 reconciler_common.go:293] "Volume 
detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947816 4812 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947824 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947832 4812 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947840 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947848 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947856 4812 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947864 4812 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node 
\"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947872 4812 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947880 4812 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947888 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947896 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947903 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947911 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947920 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc 
kubenswrapper[4812]: I0216 13:32:12.947929 4812 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947937 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947946 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947955 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947963 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947971 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947979 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947987 4812 reconciler_common.go:293] 
"Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.947996 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948004 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948012 4812 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948022 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948030 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948038 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948048 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") 
on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948058 4812 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948069 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948080 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948090 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948100 4812 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948110 4812 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948119 4812 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 
13:32:12.948129 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948139 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948149 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948158 4812 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948168 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948178 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948187 4812 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948197 4812 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948206 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948216 4812 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948236 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948246 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948257 4812 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948267 4812 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948278 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948290 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948301 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948311 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948320 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948330 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948339 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948349 4812 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc 
kubenswrapper[4812]: I0216 13:32:12.948358 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948396 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948408 4812 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948418 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948428 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948438 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948465 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.948517 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.949302 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.951491 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.952340 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.954632 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.960368 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.970010 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.974338 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.986129 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.995751 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.995773 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.996411 4812 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 16 13:32:12 crc kubenswrapper[4812]: I0216 13:32:12.996482 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.006385 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.006701 4812 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.014256 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.014759 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.016423 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.016783 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be" exitCode=255 Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.016822 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be"} Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.016870 4812 scope.go:117] "RemoveContainer" containerID="1ac4778b23c98ccc871567dac911aae65499dd17212eba145817044f6f6d19c8" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.019420 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 
13:32:13.028295 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.038013 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.049415 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.049472 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.049485 4812 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.049494 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.050191 4812 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.060697 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.070950 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.079522 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.090317 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.101140 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.118467 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.134848 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac4778b23c98ccc871567dac911aae65499dd17212eba145817044f6f6d19c8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:31:56Z\\\",\\\"message\\\":\\\"W0216 13:31:55.444017 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 13:31:55.444912 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771248715 cert, and key in /tmp/serving-cert-899001435/serving-signer.crt, /tmp/serving-cert-899001435/serving-signer.key\\\\nI0216 13:31:55.754702 1 observer_polling.go:159] Starting file observer\\\\nW0216 13:31:55.757988 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 13:31:55.758159 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:31:55.760125 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-899001435/tls.crt::/tmp/serving-cert-899001435/tls.key\\\\\\\"\\\\nF0216 13:31:56.160740 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.146361 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.149461 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 13:32:13 crc kubenswrapper[4812]: W0216 13:32:13.163949 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-5536f14c2283b9a2de69bbeb59867b4ecabb9f20f98f2e6ecad8edd377ded20b WatchSource:0}: Error finding container 5536f14c2283b9a2de69bbeb59867b4ecabb9f20f98f2e6ecad8edd377ded20b: Status 404 returned error can't find the container with id 5536f14c2283b9a2de69bbeb59867b4ecabb9f20f98f2e6ecad8edd377ded20b Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.170961 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.215146 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 13:32:13 crc kubenswrapper[4812]: W0216 13:32:13.225913 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-114a31e62f7be5f0d7f84eb941766a669b086429395a6f41b475300c4d39cdc4 WatchSource:0}: Error finding container 114a31e62f7be5f0d7f84eb941766a669b086429395a6f41b475300c4d39cdc4: Status 404 returned error can't find the container with id 114a31e62f7be5f0d7f84eb941766a669b086429395a6f41b475300c4d39cdc4 Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.351092 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:32:13 crc kubenswrapper[4812]: E0216 13:32:13.351381 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:32:14.351334333 +0000 UTC m=+23.415665034 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.443114 4812 csr.go:261] certificate signing request csr-hs2zn is approved, waiting to be issued Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.452469 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.452533 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.452564 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.452596 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:13 crc kubenswrapper[4812]: E0216 13:32:13.452694 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 13:32:13 crc kubenswrapper[4812]: E0216 13:32:13.452733 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 13:32:13 crc kubenswrapper[4812]: E0216 13:32:13.452753 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 13:32:13 crc kubenswrapper[4812]: E0216 13:32:13.452766 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:13 crc kubenswrapper[4812]: E0216 13:32:13.452699 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 13:32:13 crc kubenswrapper[4812]: E0216 13:32:13.452827 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 13:32:13 crc kubenswrapper[4812]: E0216 
13:32:13.452861 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:13 crc kubenswrapper[4812]: E0216 13:32:13.452717 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 13:32:13 crc kubenswrapper[4812]: E0216 13:32:13.452810 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:14.45278584 +0000 UTC m=+23.517116621 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 13:32:13 crc kubenswrapper[4812]: E0216 13:32:13.452946 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:14.452928744 +0000 UTC m=+23.517259545 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:13 crc kubenswrapper[4812]: E0216 13:32:13.452962 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:14.452954465 +0000 UTC m=+23.517285296 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:13 crc kubenswrapper[4812]: E0216 13:32:13.452976 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:14.452968705 +0000 UTC m=+23.517299536 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.460023 4812 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.460132 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.466621 4812 csr.go:257] certificate signing request csr-hs2zn is issued Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.563241 4812 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.563298 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" 
Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.853496 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:10:26.070285759 +0000 UTC Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.878922 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:13 crc kubenswrapper[4812]: E0216 13:32:13.879104 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.882760 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.883485 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.884512 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.885383 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.886239 4812 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.886945 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.887757 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.888517 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.889349 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.890112 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.890813 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.894306 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.894983 4812 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.895721 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.897161 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.897955 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.898866 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.899322 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.901766 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.902515 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.903659 4812 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.904471 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.905025 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.906240 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.906663 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.907928 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.908811 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.909871 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.910607 4812 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.911491 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.911955 4812 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.912072 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.914276 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.914958 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.915430 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.917077 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 16 13:32:13 
crc kubenswrapper[4812]: I0216 13:32:13.918213 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.918925 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.919997 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.920792 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.921817 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.922672 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.923903 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.924662 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 16 13:32:13 
crc kubenswrapper[4812]: I0216 13:32:13.925644 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.926162 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.927238 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.928009 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.929016 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.929622 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.930643 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.931319 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 16 13:32:13 
crc kubenswrapper[4812]: I0216 13:32:13.932151 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 16 13:32:13 crc kubenswrapper[4812]: I0216 13:32:13.933196 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.020480 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99"} Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.020534 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f5739f97637ec497a171ceaafe4da357844a77ac1f8f681f0cd5c40be9e4c462"} Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.021282 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5536f14c2283b9a2de69bbeb59867b4ecabb9f20f98f2e6ecad8edd377ded20b"} Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.023115 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.025077 4812 scope.go:117] "RemoveContainer" containerID="4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be" Feb 16 13:32:14 crc kubenswrapper[4812]: E0216 13:32:14.025256 4812 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.026318 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055"} Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.026351 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f"} Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.026364 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"114a31e62f7be5f0d7f84eb941766a669b086429395a6f41b475300c4d39cdc4"} Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.033649 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.042219 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.068086 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac4778b23c98ccc871567dac911aae65499dd17212eba145817044f6f6d19c8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:31:56Z\\\",\\\"message\\\":\\\"W0216 13:31:55.444017 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 13:31:55.444912 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771248715 cert, and key in /tmp/serving-cert-899001435/serving-signer.crt, /tmp/serving-cert-899001435/serving-signer.key\\\\nI0216 13:31:55.754702 1 observer_polling.go:159] Starting file observer\\\\nW0216 13:31:55.757988 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 13:31:55.758159 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:31:55.760125 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-899001435/tls.crt::/tmp/serving-cert-899001435/tls.key\\\\\\\"\\\\nF0216 13:31:56.160740 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.091221 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.141360 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.162419 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.176181 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.202571 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.218982 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-2hhp5"] Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.219281 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-p9b2s"] Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.219470 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.219485 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-p9b2s" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.222332 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.222348 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.222399 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.222407 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.222333 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.222567 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.223108 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 
16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.223244 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.224001 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-c6mn9"] Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.224414 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-q8g94"] Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.224526 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.225063 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.227586 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.227597 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.227682 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.227753 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.227863 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.227955 4812 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.227993 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee
338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.229036 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.244954 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260274 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c55e49a-a30d-4950-a690-c33d9f8a31e0-mcd-auth-proxy-config\") pod \"machine-config-daemon-c6mn9\" (UID: \"3c55e49a-a30d-4950-a690-c33d9f8a31e0\") " pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260308 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-system-cni-dir\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260326 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-multus-cni-dir\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260339 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3c55e49a-a30d-4950-a690-c33d9f8a31e0-rootfs\") pod \"machine-config-daemon-c6mn9\" (UID: \"3c55e49a-a30d-4950-a690-c33d9f8a31e0\") " pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260357 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-multus-socket-dir-parent\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260374 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c55e49a-a30d-4950-a690-c33d9f8a31e0-proxy-tls\") pod \"machine-config-daemon-c6mn9\" (UID: \"3c55e49a-a30d-4950-a690-c33d9f8a31e0\") " pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260394 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-run-netns\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260501 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-var-lib-cni-bin\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260549 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/145eec20-9328-4b99-b0ec-4870b6761385-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260571 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/934e533e-cc26-4770-af67-3dbcaa0dab5b-multus-daemon-config\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260609 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-run-multus-certs\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260640 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-cnibin\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260660 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-etc-kubernetes\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260693 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/934e533e-cc26-4770-af67-3dbcaa0dab5b-cni-binary-copy\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260714 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/145eec20-9328-4b99-b0ec-4870b6761385-os-release\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260736 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/145eec20-9328-4b99-b0ec-4870b6761385-tuning-conf-dir\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260761 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2tjt\" (UniqueName: \"kubernetes.io/projected/fcdbcfde-ed95-4587-a92e-c7fa071b1b8f-kube-api-access-f2tjt\") pod \"node-resolver-p9b2s\" (UID: \"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\") " pod="openshift-dns/node-resolver-p9b2s" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260786 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/145eec20-9328-4b99-b0ec-4870b6761385-cnibin\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260808 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-var-lib-cni-multus\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260829 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-multus-conf-dir\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260878 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/145eec20-9328-4b99-b0ec-4870b6761385-system-cni-dir\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260900 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-var-lib-kubelet\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260928 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/145eec20-9328-4b99-b0ec-4870b6761385-cni-binary-copy\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260949 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-run-k8s-cni-cncf-io\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.260974 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fcdbcfde-ed95-4587-a92e-c7fa071b1b8f-hosts-file\") pod \"node-resolver-p9b2s\" (UID: \"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\") " pod="openshift-dns/node-resolver-p9b2s" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.261027 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjqx2\" (UniqueName: \"kubernetes.io/projected/3c55e49a-a30d-4950-a690-c33d9f8a31e0-kube-api-access-gjqx2\") pod \"machine-config-daemon-c6mn9\" (UID: \"3c55e49a-a30d-4950-a690-c33d9f8a31e0\") " pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.261065 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4vr9\" (UniqueName: \"kubernetes.io/projected/145eec20-9328-4b99-b0ec-4870b6761385-kube-api-access-w4vr9\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " 
pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.261085 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-os-release\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.261100 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-hostroot\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.261115 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2xc4\" (UniqueName: \"kubernetes.io/projected/934e533e-cc26-4770-af67-3dbcaa0dab5b-kube-api-access-c2xc4\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.262172 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.271633 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.282663 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.294885 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.310205 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.323057 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.334662 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.346204 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.356029 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.361818 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.361921 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c55e49a-a30d-4950-a690-c33d9f8a31e0-mcd-auth-proxy-config\") pod \"machine-config-daemon-c6mn9\" (UID: \"3c55e49a-a30d-4950-a690-c33d9f8a31e0\") " pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.361950 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-system-cni-dir\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.361968 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-multus-cni-dir\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: E0216 13:32:14.362026 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:32:16.361997006 +0000 UTC m=+25.426327707 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362092 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3c55e49a-a30d-4950-a690-c33d9f8a31e0-rootfs\") pod \"machine-config-daemon-c6mn9\" (UID: \"3c55e49a-a30d-4950-a690-c33d9f8a31e0\") " pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362122 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-system-cni-dir\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362125 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-multus-socket-dir-parent\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362182 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c55e49a-a30d-4950-a690-c33d9f8a31e0-proxy-tls\") pod \"machine-config-daemon-c6mn9\" (UID: \"3c55e49a-a30d-4950-a690-c33d9f8a31e0\") " pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:32:14 
crc kubenswrapper[4812]: I0216 13:32:14.362191 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-multus-cni-dir\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362209 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-run-netns\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362234 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-var-lib-cni-bin\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362233 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3c55e49a-a30d-4950-a690-c33d9f8a31e0-rootfs\") pod \"machine-config-daemon-c6mn9\" (UID: \"3c55e49a-a30d-4950-a690-c33d9f8a31e0\") " pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362257 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/145eec20-9328-4b99-b0ec-4870b6761385-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362284 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/934e533e-cc26-4770-af67-3dbcaa0dab5b-multus-daemon-config\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362289 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-multus-socket-dir-parent\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362310 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-var-lib-cni-bin\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362311 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-run-netns\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362321 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-run-multus-certs\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362349 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-run-multus-certs\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362392 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-cnibin\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362416 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-etc-kubernetes\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362477 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/934e533e-cc26-4770-af67-3dbcaa0dab5b-cni-binary-copy\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362502 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-etc-kubernetes\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362513 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/145eec20-9328-4b99-b0ec-4870b6761385-os-release\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " 
pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362502 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-cnibin\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362537 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/145eec20-9328-4b99-b0ec-4870b6761385-tuning-conf-dir\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362561 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2tjt\" (UniqueName: \"kubernetes.io/projected/fcdbcfde-ed95-4587-a92e-c7fa071b1b8f-kube-api-access-f2tjt\") pod \"node-resolver-p9b2s\" (UID: \"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\") " pod="openshift-dns/node-resolver-p9b2s" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362583 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/145eec20-9328-4b99-b0ec-4870b6761385-cnibin\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362602 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-var-lib-cni-multus\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc 
kubenswrapper[4812]: I0216 13:32:14.362678 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/145eec20-9328-4b99-b0ec-4870b6761385-cnibin\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362728 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-var-lib-cni-multus\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362745 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/145eec20-9328-4b99-b0ec-4870b6761385-os-release\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362758 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-multus-conf-dir\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362813 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/145eec20-9328-4b99-b0ec-4870b6761385-system-cni-dir\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362822 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-multus-conf-dir\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362838 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-var-lib-kubelet\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362860 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/145eec20-9328-4b99-b0ec-4870b6761385-system-cni-dir\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362887 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/145eec20-9328-4b99-b0ec-4870b6761385-cni-binary-copy\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362910 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-run-k8s-cni-cncf-io\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.362960 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-var-lib-kubelet\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.363025 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-host-run-k8s-cni-cncf-io\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.363035 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/934e533e-cc26-4770-af67-3dbcaa0dab5b-cni-binary-copy\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.363133 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/934e533e-cc26-4770-af67-3dbcaa0dab5b-multus-daemon-config\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.363170 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/145eec20-9328-4b99-b0ec-4870b6761385-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.363180 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/145eec20-9328-4b99-b0ec-4870b6761385-tuning-conf-dir\") 
pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.363244 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c55e49a-a30d-4950-a690-c33d9f8a31e0-mcd-auth-proxy-config\") pod \"machine-config-daemon-c6mn9\" (UID: \"3c55e49a-a30d-4950-a690-c33d9f8a31e0\") " pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.363473 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/145eec20-9328-4b99-b0ec-4870b6761385-cni-binary-copy\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.363537 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fcdbcfde-ed95-4587-a92e-c7fa071b1b8f-hosts-file\") pod \"node-resolver-p9b2s\" (UID: \"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\") " pod="openshift-dns/node-resolver-p9b2s" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.363563 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjqx2\" (UniqueName: \"kubernetes.io/projected/3c55e49a-a30d-4950-a690-c33d9f8a31e0-kube-api-access-gjqx2\") pod \"machine-config-daemon-c6mn9\" (UID: \"3c55e49a-a30d-4950-a690-c33d9f8a31e0\") " pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.363621 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/fcdbcfde-ed95-4587-a92e-c7fa071b1b8f-hosts-file\") pod \"node-resolver-p9b2s\" (UID: \"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\") " pod="openshift-dns/node-resolver-p9b2s" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.363660 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4vr9\" (UniqueName: \"kubernetes.io/projected/145eec20-9328-4b99-b0ec-4870b6761385-kube-api-access-w4vr9\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.363682 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-os-release\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.364146 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-hostroot\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.364190 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-hostroot\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.364191 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2xc4\" (UniqueName: \"kubernetes.io/projected/934e533e-cc26-4770-af67-3dbcaa0dab5b-kube-api-access-c2xc4\") pod \"multus-2hhp5\" (UID: 
\"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.364169 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/934e533e-cc26-4770-af67-3dbcaa0dab5b-os-release\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.371105 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c55e49a-a30d-4950-a690-c33d9f8a31e0-proxy-tls\") pod \"machine-config-daemon-c6mn9\" (UID: \"3c55e49a-a30d-4950-a690-c33d9f8a31e0\") " pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.376826 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.377961 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjqx2\" (UniqueName: \"kubernetes.io/projected/3c55e49a-a30d-4950-a690-c33d9f8a31e0-kube-api-access-gjqx2\") pod \"machine-config-daemon-c6mn9\" (UID: \"3c55e49a-a30d-4950-a690-c33d9f8a31e0\") " pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.378812 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4vr9\" (UniqueName: \"kubernetes.io/projected/145eec20-9328-4b99-b0ec-4870b6761385-kube-api-access-w4vr9\") pod \"multus-additional-cni-plugins-q8g94\" (UID: \"145eec20-9328-4b99-b0ec-4870b6761385\") " pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.379318 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2xc4\" (UniqueName: \"kubernetes.io/projected/934e533e-cc26-4770-af67-3dbcaa0dab5b-kube-api-access-c2xc4\") pod \"multus-2hhp5\" (UID: \"934e533e-cc26-4770-af67-3dbcaa0dab5b\") " pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.385650 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2tjt\" (UniqueName: \"kubernetes.io/projected/fcdbcfde-ed95-4587-a92e-c7fa071b1b8f-kube-api-access-f2tjt\") pod \"node-resolver-p9b2s\" 
(UID: \"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\") " pod="openshift-dns/node-resolver-p9b2s" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.389931 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.403762 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.416995 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.428697 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.441365 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.464843 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.464882 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.464903 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.464923 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:14 crc kubenswrapper[4812]: E0216 13:32:14.465029 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 13:32:14 crc kubenswrapper[4812]: E0216 13:32:14.465052 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 13:32:14 crc kubenswrapper[4812]: E0216 13:32:14.465070 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 13:32:14 crc kubenswrapper[4812]: E0216 13:32:14.465092 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 13:32:14 crc kubenswrapper[4812]: E0216 13:32:14.465102 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:14 crc kubenswrapper[4812]: E0216 13:32:14.465124 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:16.46510195 +0000 UTC m=+25.529432731 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 13:32:14 crc kubenswrapper[4812]: E0216 13:32:14.465056 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 13:32:14 crc kubenswrapper[4812]: E0216 13:32:14.465143 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:16.465135291 +0000 UTC m=+25.529465992 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 13:32:14 crc kubenswrapper[4812]: E0216 13:32:14.465147 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 13:32:14 crc kubenswrapper[4812]: E0216 13:32:14.465158 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:14 crc kubenswrapper[4812]: E0216 13:32:14.465163 4812 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:16.465154602 +0000 UTC m=+25.529485413 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:14 crc kubenswrapper[4812]: E0216 13:32:14.465188 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:16.465175022 +0000 UTC m=+25.529505723 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.467732 4812 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 13:27:13 +0000 UTC, rotation deadline is 2026-11-13 04:18:36.406392439 +0000 UTC Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.467779 4812 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6470h46m21.938616143s for next certificate rotation Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.531708 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-p9b2s" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.536824 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2hhp5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.542503 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.547854 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-q8g94" Feb 16 13:32:14 crc kubenswrapper[4812]: W0216 13:32:14.555551 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcdbcfde_ed95_4587_a92e_c7fa071b1b8f.slice/crio-cbd089f4ce792f247bd25b6b4e2158cb2d35f2dda013ac9b1c436e9541cf413f WatchSource:0}: Error finding container cbd089f4ce792f247bd25b6b4e2158cb2d35f2dda013ac9b1c436e9541cf413f: Status 404 returned error can't find the container with id cbd089f4ce792f247bd25b6b4e2158cb2d35f2dda013ac9b1c436e9541cf413f Feb 16 13:32:14 crc kubenswrapper[4812]: W0216 13:32:14.559853 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod934e533e_cc26_4770_af67_3dbcaa0dab5b.slice/crio-aa34c13c91d6bd2164ceb963b8ed60820ab5524742eecfb2db67f3f9d30c2a65 WatchSource:0}: Error finding container aa34c13c91d6bd2164ceb963b8ed60820ab5524742eecfb2db67f3f9d30c2a65: Status 404 returned error can't find the container with id aa34c13c91d6bd2164ceb963b8ed60820ab5524742eecfb2db67f3f9d30c2a65 Feb 16 13:32:14 crc kubenswrapper[4812]: W0216 13:32:14.570628 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c55e49a_a30d_4950_a690_c33d9f8a31e0.slice/crio-7470a4212664ca049a3940c1e4a1985d31ee2f362e923b759da1fb72ac019876 WatchSource:0}: Error finding container 7470a4212664ca049a3940c1e4a1985d31ee2f362e923b759da1fb72ac019876: Status 404 returned error can't find the container with id 7470a4212664ca049a3940c1e4a1985d31ee2f362e923b759da1fb72ac019876 Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.600480 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pzksg"] Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.601229 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.603148 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.603574 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.603817 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.603939 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.605543 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.605798 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.605938 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.617035 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.634383 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.655084 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.667372 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovnkube-script-lib\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc 
kubenswrapper[4812]: I0216 13:32:14.667609 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-slash\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.667704 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-openvswitch\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.667796 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-cni-bin\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.667867 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovnkube-config\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.667957 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-systemd-units\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 
crc kubenswrapper[4812]: I0216 13:32:14.668046 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.668116 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg2hw\" (UniqueName: \"kubernetes.io/projected/a67ca714-af04-4a76-8a28-54d47f66b1fa-kube-api-access-tg2hw\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.668199 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-env-overrides\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.668283 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovn-node-metrics-cert\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.668364 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-cni-netd\") pod \"ovnkube-node-pzksg\" (UID: 
\"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.668484 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-var-lib-openvswitch\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.668585 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-log-socket\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.668689 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.668798 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-ovn\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.668900 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-node-log\") pod 
\"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.669026 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-systemd\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.669116 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-etc-openvswitch\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.669202 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-run-netns\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.669289 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-kubelet\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.670835 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.688176 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.703896 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.719713 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.742196 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.756428 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770268 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770410 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-openvswitch\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770468 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-cni-bin\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770490 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovnkube-config\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770511 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-systemd-units\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770532 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770556 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg2hw\" (UniqueName: \"kubernetes.io/projected/a67ca714-af04-4a76-8a28-54d47f66b1fa-kube-api-access-tg2hw\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770577 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-cni-netd\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770597 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-env-overrides\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770616 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovn-node-metrics-cert\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770636 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-var-lib-openvswitch\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770659 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-log-socket\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770723 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770746 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-ovn\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770779 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-node-log\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770804 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-run-netns\") pod 
\"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770825 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-systemd\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770847 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-etc-openvswitch\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770869 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-kubelet\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770892 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-slash\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.770912 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovnkube-script-lib\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.771050 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-var-lib-openvswitch\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.771126 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-openvswitch\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.771158 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-cni-bin\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.771759 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-run-netns\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.771792 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: 
I0216 13:32:14.771829 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-ovn\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.771849 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-log-socket\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.771865 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-node-log\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.771876 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-etc-openvswitch\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.771881 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovnkube-config\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.771890 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-systemd\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.771878 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovnkube-script-lib\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.771910 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-kubelet\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.771916 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-slash\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.771974 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-systemd-units\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.772002 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-cni-netd\") pod \"ovnkube-node-pzksg\" (UID: 
\"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.772028 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.772319 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-env-overrides\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.776733 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovn-node-metrics-cert\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.784844 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.791558 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg2hw\" (UniqueName: \"kubernetes.io/projected/a67ca714-af04-4a76-8a28-54d47f66b1fa-kube-api-access-tg2hw\") pod \"ovnkube-node-pzksg\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.797395 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.854589 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:52:20.213711099 +0000 UTC Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.874056 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.877475 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.877841 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:14 crc kubenswrapper[4812]: E0216 13:32:14.877959 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.877963 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:14 crc kubenswrapper[4812]: E0216 13:32:14.878031 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.884046 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.887329 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.898959 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.913388 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.922880 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.941093 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: W0216 13:32:14.943430 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda67ca714_af04_4a76_8a28_54d47f66b1fa.slice/crio-58b335f78768348993a51c94c1f0eda0952cebb019a44c6c0880f865550b4a2d WatchSource:0}: Error finding container 58b335f78768348993a51c94c1f0eda0952cebb019a44c6c0880f865550b4a2d: Status 404 returned error can't find the container with id 58b335f78768348993a51c94c1f0eda0952cebb019a44c6c0880f865550b4a2d Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 
13:32:14.969894 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:14 crc kubenswrapper[4812]: I0216 13:32:14.991528 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:14Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.019840 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.030117 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p9b2s" event={"ID":"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f","Type":"ContainerStarted","Data":"9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142"} Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.030170 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p9b2s" event={"ID":"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f","Type":"ContainerStarted","Data":"cbd089f4ce792f247bd25b6b4e2158cb2d35f2dda013ac9b1c436e9541cf413f"} Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.031247 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2hhp5" event={"ID":"934e533e-cc26-4770-af67-3dbcaa0dab5b","Type":"ContainerStarted","Data":"8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b"} Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.031280 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2hhp5" event={"ID":"934e533e-cc26-4770-af67-3dbcaa0dab5b","Type":"ContainerStarted","Data":"aa34c13c91d6bd2164ceb963b8ed60820ab5524742eecfb2db67f3f9d30c2a65"} Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.032153 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerStarted","Data":"58b335f78768348993a51c94c1f0eda0952cebb019a44c6c0880f865550b4a2d"} Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.033856 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerStarted","Data":"03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74"} Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.033912 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerStarted","Data":"0fe6225f5f96d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6"} Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.033927 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerStarted","Data":"7470a4212664ca049a3940c1e4a1985d31ee2f362e923b759da1fb72ac019876"} Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.035126 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" event={"ID":"145eec20-9328-4b99-b0ec-4870b6761385","Type":"ContainerStarted","Data":"50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651"} Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.035150 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" event={"ID":"145eec20-9328-4b99-b0ec-4870b6761385","Type":"ContainerStarted","Data":"3360a172ecfb345236aaf27a2dffc058a9d0d8a40146c769eaf2677649596018"} Feb 16 13:32:15 crc kubenswrapper[4812]: E0216 13:32:15.044695 4812 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.055808 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.086655 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.109014 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.124480 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.138539 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.152714 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.169249 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.180682 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.195377 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.206882 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.219353 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.236967 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.251764 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.263685 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.279013 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96
d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.290614 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.300910 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.313190 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:15Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.855492 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:51:04.275332712 +0000 UTC Feb 16 13:32:15 crc kubenswrapper[4812]: I0216 13:32:15.878157 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:15 crc kubenswrapper[4812]: E0216 13:32:15.878339 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.038558 4812 generic.go:334] "Generic (PLEG): container finished" podID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerID="7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657" exitCode=0 Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.038639 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerDied","Data":"7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657"} Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.040360 4812 generic.go:334] "Generic (PLEG): container finished" podID="145eec20-9328-4b99-b0ec-4870b6761385" containerID="50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651" exitCode=0 Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.040396 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" event={"ID":"145eec20-9328-4b99-b0ec-4870b6761385","Type":"ContainerDied","Data":"50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651"} Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.048106 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3"} Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.053053 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.076635 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.093251 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.107176 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.123543 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.139829 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.152687 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.167430 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.180691 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96
d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.193102 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.208243 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.223301 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.235527 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.246942 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.259626 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.269694 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96
d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.280625 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.297308 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.316615 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.330249 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.344244 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.356976 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.370792 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f85
9b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.383348 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.389718 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:32:16 crc kubenswrapper[4812]: E0216 13:32:16.389916 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:32:20.389893447 +0000 UTC m=+29.454224148 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.392810 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.416991 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.491150 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.491305 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:16 crc kubenswrapper[4812]: E0216 13:32:16.491307 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 13:32:16 crc kubenswrapper[4812]: E0216 13:32:16.491461 4812 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:20.491402026 +0000 UTC m=+29.555732727 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.491482 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.491632 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:16 crc kubenswrapper[4812]: E0216 13:32:16.491742 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 13:32:16 crc kubenswrapper[4812]: E0216 13:32:16.491764 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 13:32:16 crc kubenswrapper[4812]: E0216 13:32:16.491776 4812 projected.go:194] 
Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:16 crc kubenswrapper[4812]: E0216 13:32:16.491815 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:20.491803408 +0000 UTC m=+29.556134109 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:16 crc kubenswrapper[4812]: E0216 13:32:16.491957 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 13:32:16 crc kubenswrapper[4812]: E0216 13:32:16.491979 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 13:32:16 crc kubenswrapper[4812]: E0216 13:32:16.492100 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:16 crc kubenswrapper[4812]: E0216 
13:32:16.492226 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 13:32:16 crc kubenswrapper[4812]: E0216 13:32:16.492267 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:20.492144227 +0000 UTC m=+29.556474988 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:16 crc kubenswrapper[4812]: E0216 13:32:16.492484 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:20.492470577 +0000 UTC m=+29.556801288 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.799725 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-5w4kf"] Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.800439 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-5w4kf" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.802211 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.802280 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.802401 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.802746 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.814915 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.829100 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.848851 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.856041 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:08:48.917380683 +0000 UTC Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.877580 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.877875 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.877940 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:16 crc kubenswrapper[4812]: E0216 13:32:16.877986 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:16 crc kubenswrapper[4812]: E0216 13:32:16.878107 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.891100 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.896357 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvg5w\" (UniqueName: \"kubernetes.io/projected/9f07f8fe-99f2-4f2e-b9f8-56841d756064-kube-api-access-hvg5w\") pod \"node-ca-5w4kf\" (UID: \"9f07f8fe-99f2-4f2e-b9f8-56841d756064\") " pod="openshift-image-registry/node-ca-5w4kf" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.896403 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9f07f8fe-99f2-4f2e-b9f8-56841d756064-serviceca\") pod \"node-ca-5w4kf\" (UID: \"9f07f8fe-99f2-4f2e-b9f8-56841d756064\") " pod="openshift-image-registry/node-ca-5w4kf" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.896428 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f07f8fe-99f2-4f2e-b9f8-56841d756064-host\") pod \"node-ca-5w4kf\" (UID: \"9f07f8fe-99f2-4f2e-b9f8-56841d756064\") " pod="openshift-image-registry/node-ca-5w4kf" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.903264 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:355123
35ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.919970 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.932031 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.942233 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.953995 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.965895 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.975646 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.986869 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.997304 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/9f07f8fe-99f2-4f2e-b9f8-56841d756064-host\") pod \"node-ca-5w4kf\" (UID: \"9f07f8fe-99f2-4f2e-b9f8-56841d756064\") " pod="openshift-image-registry/node-ca-5w4kf" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.997407 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvg5w\" (UniqueName: \"kubernetes.io/projected/9f07f8fe-99f2-4f2e-b9f8-56841d756064-kube-api-access-hvg5w\") pod \"node-ca-5w4kf\" (UID: \"9f07f8fe-99f2-4f2e-b9f8-56841d756064\") " pod="openshift-image-registry/node-ca-5w4kf" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.997417 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f07f8fe-99f2-4f2e-b9f8-56841d756064-host\") pod \"node-ca-5w4kf\" (UID: \"9f07f8fe-99f2-4f2e-b9f8-56841d756064\") " pod="openshift-image-registry/node-ca-5w4kf" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.997458 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9f07f8fe-99f2-4f2e-b9f8-56841d756064-serviceca\") pod \"node-ca-5w4kf\" (UID: \"9f07f8fe-99f2-4f2e-b9f8-56841d756064\") " pod="openshift-image-registry/node-ca-5w4kf" Feb 16 13:32:16 crc kubenswrapper[4812]: I0216 13:32:16.998869 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9f07f8fe-99f2-4f2e-b9f8-56841d756064-serviceca\") pod \"node-ca-5w4kf\" (UID: \"9f07f8fe-99f2-4f2e-b9f8-56841d756064\") " pod="openshift-image-registry/node-ca-5w4kf" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.000321 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b
9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:16Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.014744 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvg5w\" (UniqueName: \"kubernetes.io/projected/9f07f8fe-99f2-4f2e-b9f8-56841d756064-kube-api-access-hvg5w\") pod \"node-ca-5w4kf\" (UID: \"9f07f8fe-99f2-4f2e-b9f8-56841d756064\") " pod="openshift-image-registry/node-ca-5w4kf" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.056810 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" 
event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerStarted","Data":"83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5"} Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.056884 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerStarted","Data":"c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056"} Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.056898 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerStarted","Data":"a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822"} Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.056909 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerStarted","Data":"04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38"} Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.056944 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerStarted","Data":"880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b"} Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.056955 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerStarted","Data":"73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a"} Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.059001 4812 generic.go:334] "Generic (PLEG): container finished" podID="145eec20-9328-4b99-b0ec-4870b6761385" containerID="a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0" 
exitCode=0 Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.059078 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" event={"ID":"145eec20-9328-4b99-b0ec-4870b6761385","Type":"ContainerDied","Data":"a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0"} Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.079011 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:17Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.091314 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:17Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.106227 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:17Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.113008 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-5w4kf" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.120797 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:17Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.136840 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:17Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.150745 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:17Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.160653 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:17Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.180717 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:17Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.205673 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:17Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.216914 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:17Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.231824 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:17Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.271110 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:17Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.314221 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:17Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.352058 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:17Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.856699 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 07:54:31.438796439 +0000 UTC Feb 16 13:32:17 crc kubenswrapper[4812]: I0216 13:32:17.878341 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:17 crc kubenswrapper[4812]: E0216 13:32:17.878493 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.065468 4812 generic.go:334] "Generic (PLEG): container finished" podID="145eec20-9328-4b99-b0ec-4870b6761385" containerID="82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee" exitCode=0 Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.065536 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" event={"ID":"145eec20-9328-4b99-b0ec-4870b6761385","Type":"ContainerDied","Data":"82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee"} Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.070533 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5w4kf" event={"ID":"9f07f8fe-99f2-4f2e-b9f8-56841d756064","Type":"ContainerStarted","Data":"6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27"} Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.070579 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5w4kf" event={"ID":"9f07f8fe-99f2-4f2e-b9f8-56841d756064","Type":"ContainerStarted","Data":"8d99969b67886f4def99ae9e14e94dfc05fee9ede68f8d7cffd51c824941aabd"} Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.079861 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.103623 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.120695 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 
13:32:18.133852 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.145115 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.160246 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.172673 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.195804 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.210891 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.223524 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.238020 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.253766 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.265811 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.276985 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96
d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.288215 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.299626 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.314958 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.328315 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.329171 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.330965 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.331016 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.331028 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.331154 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.337607 4812 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.337979 4812 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.340103 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.340157 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.340169 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.340191 4812 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.340205 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:18Z","lastTransitionTime":"2026-02-16T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.347165 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: E0216 13:32:18.353560 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.356752 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.356799 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.356811 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.356828 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.356838 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:18Z","lastTransitionTime":"2026-02-16T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.360491 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: E0216 13:32:18.368604 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.372207 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.372243 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.372252 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.372274 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.372313 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:18Z","lastTransitionTime":"2026-02-16T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.372806 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: E0216 13:32:18.383023 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.386260 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.386477 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:18 crc 
kubenswrapper[4812]: I0216 13:32:18.386494 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.386502 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.386515 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.386523 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:18Z","lastTransitionTime":"2026-02-16T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:18 crc kubenswrapper[4812]: E0216 13:32:18.397789 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.400410 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.401622 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.401660 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.401672 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.401702 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.401713 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:18Z","lastTransitionTime":"2026-02-16T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:18 crc kubenswrapper[4812]: E0216 13:32:18.413519 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: E0216 13:32:18.413633 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.415230 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.415258 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.415266 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.415282 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.415292 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:18Z","lastTransitionTime":"2026-02-16T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.417214 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z 
is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.431275 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.444022 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.471569 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.516428 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:18Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 
13:32:18.517649 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.517678 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.517687 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.517705 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.517715 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:18Z","lastTransitionTime":"2026-02-16T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.619807 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.619843 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.619856 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.619871 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.619882 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:18Z","lastTransitionTime":"2026-02-16T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.722697 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.722741 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.722753 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.722769 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.722781 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:18Z","lastTransitionTime":"2026-02-16T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.825644 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.825688 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.825698 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.825720 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.825731 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:18Z","lastTransitionTime":"2026-02-16T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.857640 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:22:09.748501945 +0000 UTC Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.878575 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.878593 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:18 crc kubenswrapper[4812]: E0216 13:32:18.878744 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:18 crc kubenswrapper[4812]: E0216 13:32:18.878823 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.928623 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.928651 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.928660 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.928673 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:18 crc kubenswrapper[4812]: I0216 13:32:18.928681 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:18Z","lastTransitionTime":"2026-02-16T13:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.030973 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.031019 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.031029 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.031043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.031053 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:19Z","lastTransitionTime":"2026-02-16T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.082119 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerStarted","Data":"f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d"} Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.085628 4812 generic.go:334] "Generic (PLEG): container finished" podID="145eec20-9328-4b99-b0ec-4870b6761385" containerID="4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75" exitCode=0 Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.085678 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" event={"ID":"145eec20-9328-4b99-b0ec-4870b6761385","Type":"ContainerDied","Data":"4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75"} Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.100785 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.118475 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.133914 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.133965 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.133976 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.133992 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.134003 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:19Z","lastTransitionTime":"2026-02-16T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.138774 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.154842 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.169569 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.182070 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.194736 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.208741 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.219224 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96
d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.230615 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.237329 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.237374 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.237383 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.237399 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.237408 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:19Z","lastTransitionTime":"2026-02-16T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.247071 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.265953 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.281200 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.293584 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.339622 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.339657 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.339667 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.339682 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.339692 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:19Z","lastTransitionTime":"2026-02-16T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.442192 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.442225 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.442236 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.442252 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.442262 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:19Z","lastTransitionTime":"2026-02-16T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.546213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.546252 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.546265 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.546280 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.546294 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:19Z","lastTransitionTime":"2026-02-16T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.648772 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.648809 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.648819 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.648835 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.648847 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:19Z","lastTransitionTime":"2026-02-16T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.751864 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.751926 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.751943 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.751976 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.751995 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:19Z","lastTransitionTime":"2026-02-16T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.854840 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.854909 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.854924 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.854962 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.854979 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:19Z","lastTransitionTime":"2026-02-16T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.857848 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 07:01:52.412898889 +0000 UTC Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.878366 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:19 crc kubenswrapper[4812]: E0216 13:32:19.878604 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.958930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.958978 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.958991 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.959017 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:19 crc kubenswrapper[4812]: I0216 13:32:19.959033 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:19Z","lastTransitionTime":"2026-02-16T13:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.061574 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.061605 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.061613 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.061626 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.061635 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:20Z","lastTransitionTime":"2026-02-16T13:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.092992 4812 generic.go:334] "Generic (PLEG): container finished" podID="145eec20-9328-4b99-b0ec-4870b6761385" containerID="3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc" exitCode=0 Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.093053 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" event={"ID":"145eec20-9328-4b99-b0ec-4870b6761385","Type":"ContainerDied","Data":"3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc"} Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.115319 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:20Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.134159 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:20Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.151220 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:20Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.163952 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.164026 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.164040 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.164063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.164076 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:20Z","lastTransitionTime":"2026-02-16T13:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.167923 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:20Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.186132 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:20Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.202601 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:20Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.216678 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:20Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.230116 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:20Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.246252 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:20Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.259167 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:20Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.268703 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.268739 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.268751 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.268768 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.268781 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:20Z","lastTransitionTime":"2026-02-16T13:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.278412 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:20Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.293414 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:20Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.305877 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:20Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.319759 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:20Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.371571 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.371608 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.371618 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.371634 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.371644 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:20Z","lastTransitionTime":"2026-02-16T13:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.435152 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:32:20 crc kubenswrapper[4812]: E0216 13:32:20.435384 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:32:28.435352947 +0000 UTC m=+37.499683658 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.473854 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.473904 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.473917 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.473939 4812 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.473966 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:20Z","lastTransitionTime":"2026-02-16T13:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.536317 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.536393 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.536518 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.536571 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:20 crc kubenswrapper[4812]: E0216 13:32:20.536648 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 13:32:20 crc kubenswrapper[4812]: E0216 13:32:20.536736 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 13:32:20 crc kubenswrapper[4812]: E0216 13:32:20.536744 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 13:32:20 crc kubenswrapper[4812]: E0216 13:32:20.536868 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 13:32:20 crc kubenswrapper[4812]: E0216 13:32:20.536889 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:20 crc kubenswrapper[4812]: E0216 13:32:20.536924 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 13:32:20 crc kubenswrapper[4812]: E0216 13:32:20.536965 4812 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 13:32:20 crc kubenswrapper[4812]: E0216 13:32:20.536978 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:20 crc kubenswrapper[4812]: E0216 13:32:20.536766 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:28.536739122 +0000 UTC m=+37.601069873 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 13:32:20 crc kubenswrapper[4812]: E0216 13:32:20.537071 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:28.537037141 +0000 UTC m=+37.601367852 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 13:32:20 crc kubenswrapper[4812]: E0216 13:32:20.537100 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:28.537090232 +0000 UTC m=+37.601420943 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:20 crc kubenswrapper[4812]: E0216 13:32:20.537116 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:28.537108433 +0000 UTC m=+37.601439144 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.577212 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.577260 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.577282 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.577300 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.577312 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:20Z","lastTransitionTime":"2026-02-16T13:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.680564 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.680604 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.680615 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.680630 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.680651 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:20Z","lastTransitionTime":"2026-02-16T13:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.784217 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.784707 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.784791 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.784874 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.784945 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:20Z","lastTransitionTime":"2026-02-16T13:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.858710 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:54:58.144494406 +0000 UTC Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.877960 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.878142 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:20 crc kubenswrapper[4812]: E0216 13:32:20.878370 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:20 crc kubenswrapper[4812]: E0216 13:32:20.878596 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.888997 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.889032 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.889041 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.889060 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.889070 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:20Z","lastTransitionTime":"2026-02-16T13:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.991213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.991250 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.991261 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.991278 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:20 crc kubenswrapper[4812]: I0216 13:32:20.991297 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:20Z","lastTransitionTime":"2026-02-16T13:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.093637 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.093661 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.093669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.093681 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.093689 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:21Z","lastTransitionTime":"2026-02-16T13:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.098554 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerStarted","Data":"37aada86aa6e3ba6a1e9a96535b2ccd737a2a1ae869f3369e6af88687c25e9e9"} Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.099587 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.099663 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.104434 4812 generic.go:334] "Generic (PLEG): container finished" podID="145eec20-9328-4b99-b0ec-4870b6761385" containerID="5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73" exitCode=0 Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.104489 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" event={"ID":"145eec20-9328-4b99-b0ec-4870b6761385","Type":"ContainerDied","Data":"5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73"} Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.112922 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.123791 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.126486 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.127298 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.141505 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37aada86aa6e3ba6a1e9a96535b2ccd737a2a1ae869f3369e6af88687c25e9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753d
b14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.159737 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.172431 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.186977 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.199649 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.199708 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.199724 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.199746 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.199762 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:21Z","lastTransitionTime":"2026-02-16T13:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.199803 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.213081 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.224900 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96
d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.239185 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.252467 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.266980 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.280758 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.292546 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.301732 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.301761 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.301770 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.301783 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.301792 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:21Z","lastTransitionTime":"2026-02-16T13:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.306743 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa
4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.319097 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.331968 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.343565 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.352457 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.364270 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.375574 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.388051 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.403479 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.404730 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.404772 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:21 crc 
kubenswrapper[4812]: I0216 13:32:21.404781 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.404798 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.404809 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:21Z","lastTransitionTime":"2026-02-16T13:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.413277 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.435420 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37aada86aa6e3ba6a1e9a96535b2ccd737a2a1ae869f3369e6af88687c25e9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.446763 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.457209 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.466570 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96
d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.507603 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.507639 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.507647 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.507663 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.507674 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:21Z","lastTransitionTime":"2026-02-16T13:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.610135 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.610182 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.610193 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.610209 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.610220 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:21Z","lastTransitionTime":"2026-02-16T13:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.666003 4812 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.713172 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.713217 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.713228 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.713244 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.713254 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:21Z","lastTransitionTime":"2026-02-16T13:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.815866 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.816149 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.816158 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.816174 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.816183 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:21Z","lastTransitionTime":"2026-02-16T13:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.859383 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 20:43:39.675996068 +0000 UTC Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.878079 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:21 crc kubenswrapper[4812]: E0216 13:32:21.878229 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.897789 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.909265 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.919156 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.919203 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.919212 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.919227 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.919237 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:21Z","lastTransitionTime":"2026-02-16T13:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.920329 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.938281 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37aada86aa6e3ba6a1e9a96535b2ccd737a2a1ae869f3369e6af88687c25e9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.957512 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.971333 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:21 crc kubenswrapper[4812]: I0216 13:32:21.988160 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.003299 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.017856 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.021189 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.021247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.021259 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.021303 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.021322 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:22Z","lastTransitionTime":"2026-02-16T13:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.030269 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.051636 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"}
,{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.065492 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.079050 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.093931 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.109888 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" event={"ID":"145eec20-9328-4b99-b0ec-4870b6761385","Type":"ContainerStarted","Data":"6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469"} Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.109997 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.123501 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.123541 4812 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.123550 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.123564 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.123575 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:22Z","lastTransitionTime":"2026-02-16T13:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.129097 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37aada86aa6e3ba6a1e9a96535b2ccd737a2a1ae869f3369e6af88687c25e9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.144550 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.155637 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.167222 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.180069 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.191237 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.205260 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.219584 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.226038 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:22 crc 
kubenswrapper[4812]: I0216 13:32:22.226089 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.226102 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.226120 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.226136 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:22Z","lastTransitionTime":"2026-02-16T13:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.234497 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b
9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.247865 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.261860 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.275042 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.285751 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.293595 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.328687 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.328734 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.328747 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.328765 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.328776 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:22Z","lastTransitionTime":"2026-02-16T13:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.430854 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.430909 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.430920 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.430934 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.430943 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:22Z","lastTransitionTime":"2026-02-16T13:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.533978 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.534049 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.534071 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.534100 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.534121 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:22Z","lastTransitionTime":"2026-02-16T13:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.636853 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.636907 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.636917 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.636931 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.636940 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:22Z","lastTransitionTime":"2026-02-16T13:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.738671 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.738706 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.738715 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.738729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.738737 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:22Z","lastTransitionTime":"2026-02-16T13:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.841472 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.841639 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.841661 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.841684 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.841699 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:22Z","lastTransitionTime":"2026-02-16T13:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.859974 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:35:11.908165542 +0000 UTC Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.878521 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.878599 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:22 crc kubenswrapper[4812]: E0216 13:32:22.878646 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:22 crc kubenswrapper[4812]: E0216 13:32:22.878765 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.944414 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.944489 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.944501 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.944516 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:22 crc kubenswrapper[4812]: I0216 13:32:22.944528 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:22Z","lastTransitionTime":"2026-02-16T13:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.047178 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.047237 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.047249 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.047267 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.047279 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:23Z","lastTransitionTime":"2026-02-16T13:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.112960 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.149709 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.149736 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.149745 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.149758 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.149768 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:23Z","lastTransitionTime":"2026-02-16T13:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.252223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.252258 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.252266 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.252279 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.252287 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:23Z","lastTransitionTime":"2026-02-16T13:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.355049 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.355094 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.355107 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.355124 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.355136 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:23Z","lastTransitionTime":"2026-02-16T13:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.457719 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.457762 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.457775 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.457793 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.457805 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:23Z","lastTransitionTime":"2026-02-16T13:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.560320 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.560363 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.560374 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.560391 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.560403 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:23Z","lastTransitionTime":"2026-02-16T13:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.562570 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.563175 4812 scope.go:117] "RemoveContainer" containerID="4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.582104 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:23Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.600093 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:23Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.615441 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96
d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:23Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.635572 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:23Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.650042 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:23Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.662733 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.662820 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.662833 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.662851 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.662862 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:23Z","lastTransitionTime":"2026-02-16T13:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.664879 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:23Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.676055 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:23Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.687748 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:23Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.705062 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:23Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.716980 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:23Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.729689 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:23Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.742469 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:23Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.752157 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:23Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.764933 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.764967 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.764975 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.764988 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.764996 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:23Z","lastTransitionTime":"2026-02-16T13:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.770360 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37aada86aa6e3ba6a1e9a96535b2ccd737a2a1ae869f3369e6af88687c25e9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:23Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.861168 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:42:53.840821526 +0000 UTC Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.866770 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.866805 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.866814 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.866829 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.866839 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:23Z","lastTransitionTime":"2026-02-16T13:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.878424 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:23 crc kubenswrapper[4812]: E0216 13:32:23.878581 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.969422 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.969474 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.969485 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.969500 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:23 crc kubenswrapper[4812]: I0216 13:32:23.969511 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:23Z","lastTransitionTime":"2026-02-16T13:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.071916 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.072308 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.072320 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.072336 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.072361 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:24Z","lastTransitionTime":"2026-02-16T13:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.118615 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.120326 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6"} Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.120737 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.122222 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovnkube-controller/0.log" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.129514 4812 generic.go:334] "Generic (PLEG): container finished" podID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerID="37aada86aa6e3ba6a1e9a96535b2ccd737a2a1ae869f3369e6af88687c25e9e9" exitCode=1 Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.129547 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerDied","Data":"37aada86aa6e3ba6a1e9a96535b2ccd737a2a1ae869f3369e6af88687c25e9e9"} Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.130043 4812 scope.go:117] "RemoveContainer" containerID="37aada86aa6e3ba6a1e9a96535b2ccd737a2a1ae869f3369e6af88687c25e9e9" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.133523 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.147907 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.168169 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.175144 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.175174 4812 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.175183 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.175197 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.175213 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:24Z","lastTransitionTime":"2026-02-16T13:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.177238 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.189845 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.200359 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.213422 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f85
9b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.225536 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.236933 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.257776 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37aada86aa6e3ba6a1e9a96535b2ccd737a2a1ae869f3369e6af88687c25e9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.272492 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.276973 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.277009 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.277020 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.277034 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.277042 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:24Z","lastTransitionTime":"2026-02-16T13:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.288812 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.303826 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.319944 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.330657 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.343242 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.356063 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.372306 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.379471 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.379505 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.379514 4812 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.379531 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.379543 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:24Z","lastTransitionTime":"2026-02-16T13:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.381654 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3
a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.392893 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 
13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.403469 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.415608 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f85
9b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.425499 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.433476 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.449078 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37aada86aa6e3ba6a1e9a96535b2ccd737a2a1ae869f3369e6af88687c25e9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37aada86aa6e3ba6a1e9a96535b2ccd737a2a1ae869f3369e6af88687c25e9e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:23Z\\\",\\\"message\\\":\\\"1] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 13:32:23.959459 6106 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 
13:32:23.959527 6106 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 13:32:23.959670 6106 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 13:32:23.960189 6106 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 13:32:23.960218 6106 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 13:32:23.960224 6106 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 13:32:23.960241 6106 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 13:32:23.960245 6106 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 13:32:23.960267 6106 factory.go:656] Stopping watch factory\\\\nI0216 13:32:23.960277 6106 ovnkube.go:599] Stopped ovnkube\\\\nI0216 13:32:23.960294 6106 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 13:32:23.960300 6106 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 13:32:23.960306 6106 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff261
3d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.461868 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.482502 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.483308 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:24 crc 
kubenswrapper[4812]: I0216 13:32:24.483354 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.483367 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.483386 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.483399 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:24Z","lastTransitionTime":"2026-02-16T13:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.494747 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b
9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:24Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.585706 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.585737 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.585745 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:24 crc 
kubenswrapper[4812]: I0216 13:32:24.585758 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.585768 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:24Z","lastTransitionTime":"2026-02-16T13:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.688407 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.688474 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.688486 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.688505 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.688517 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:24Z","lastTransitionTime":"2026-02-16T13:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.790508 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.790551 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.790562 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.790578 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.790590 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:24Z","lastTransitionTime":"2026-02-16T13:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.862290 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 00:13:09.612796204 +0000 UTC Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.878405 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.878418 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:24 crc kubenswrapper[4812]: E0216 13:32:24.878539 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:24 crc kubenswrapper[4812]: E0216 13:32:24.878639 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.892676 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.892712 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.892722 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.892735 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.892745 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:24Z","lastTransitionTime":"2026-02-16T13:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.995208 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.995237 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.995247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.995261 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:24 crc kubenswrapper[4812]: I0216 13:32:24.995270 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:24Z","lastTransitionTime":"2026-02-16T13:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.097153 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.097391 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.097470 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.097545 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.097609 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:25Z","lastTransitionTime":"2026-02-16T13:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.135619 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovnkube-controller/1.log" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.136487 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovnkube-controller/0.log" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.139095 4812 generic.go:334] "Generic (PLEG): container finished" podID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerID="85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f" exitCode=1 Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.139164 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerDied","Data":"85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f"} Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.139398 4812 scope.go:117] "RemoveContainer" containerID="37aada86aa6e3ba6a1e9a96535b2ccd737a2a1ae869f3369e6af88687c25e9e9" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.140294 4812 scope.go:117] "RemoveContainer" containerID="85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f" Feb 16 13:32:25 crc kubenswrapper[4812]: E0216 13:32:25.140564 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.158351 4812 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"ima
geID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.170456 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.182542 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f85
9b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.195567 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.199586 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.199628 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.199637 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.199651 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.199661 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:25Z","lastTransitionTime":"2026-02-16T13:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.205211 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.223658 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37aada86aa6e3ba6a1e9a96535b2ccd737a2a1ae869f3369e6af88687c25e9e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:23Z\\\",\\\"message\\\":\\\"1] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0216 13:32:23.959459 6106 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 13:32:23.959527 6106 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 13:32:23.959670 6106 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0216 13:32:23.960189 6106 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 13:32:23.960218 6106 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 13:32:23.960224 6106 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 13:32:23.960241 6106 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 13:32:23.960245 6106 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 13:32:23.960267 6106 factory.go:656] Stopping watch factory\\\\nI0216 13:32:23.960277 6106 ovnkube.go:599] Stopped ovnkube\\\\nI0216 13:32:23.960294 6106 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 13:32:23.960300 6106 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 13:32:23.960306 6106 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:25Z\\\",\\\"message\\\":\\\".176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI0216 13:32:25.052962 6255 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-5w4kf\\\\nI0216 13:32:25.052979 6255 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 13:32:25.052998 6255 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0216 13:32:25.053003 6255 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z 
is\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.237095 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.249548 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.260987 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96
d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.276158 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.297816 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.302043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.302080 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.302090 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.302105 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.302116 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:25Z","lastTransitionTime":"2026-02-16T13:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.316146 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.328907 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.341056 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.404232 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.404281 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.404292 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.404310 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.404322 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:25Z","lastTransitionTime":"2026-02-16T13:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.498687 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.506040 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.506067 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.506077 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.506089 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.506097 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:25Z","lastTransitionTime":"2026-02-16T13:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.608540 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.608590 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.608602 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.608619 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.608631 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:25Z","lastTransitionTime":"2026-02-16T13:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.710883 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.710926 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.710935 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.710949 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.710959 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:25Z","lastTransitionTime":"2026-02-16T13:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.813214 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.813241 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.813249 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.813262 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.813270 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:25Z","lastTransitionTime":"2026-02-16T13:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.862397 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 02:06:47.245861339 +0000 UTC Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.878952 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:25 crc kubenswrapper[4812]: E0216 13:32:25.879133 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.914985 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.915027 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.915045 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.915066 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:25 crc kubenswrapper[4812]: I0216 13:32:25.915081 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:25Z","lastTransitionTime":"2026-02-16T13:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.017035 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.017069 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.017077 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.017092 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.017102 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:26Z","lastTransitionTime":"2026-02-16T13:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.119191 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.119501 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.119579 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.119645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.119717 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:26Z","lastTransitionTime":"2026-02-16T13:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.143390 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovnkube-controller/1.log" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.146235 4812 scope.go:117] "RemoveContainer" containerID="85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f" Feb 16 13:32:26 crc kubenswrapper[4812]: E0216 13:32:26.146366 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.158887 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b
9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.172479 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.185488 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.199269 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.215246 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.222115 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.222147 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.222156 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.222169 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.222178 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:26Z","lastTransitionTime":"2026-02-16T13:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.228076 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa
4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.237213 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.246725 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.256201 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.265908 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f85
9b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.276628 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.287826 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.307672 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:25Z\\\",\\\"message\\\":\\\".176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 13:32:25.052962 6255 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-5w4kf\\\\nI0216 13:32:25.052979 6255 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 13:32:25.052998 6255 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0216 13:32:25.053003 6255 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5e
fa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.320205 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.325328 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.325365 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.325375 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.325391 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.325405 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:26Z","lastTransitionTime":"2026-02-16T13:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.427370 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.427407 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.427417 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.427433 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.427470 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:26Z","lastTransitionTime":"2026-02-16T13:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.529698 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.529760 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.529788 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.529802 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.529812 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:26Z","lastTransitionTime":"2026-02-16T13:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.632726 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.632791 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.632802 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.632817 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.632826 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:26Z","lastTransitionTime":"2026-02-16T13:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.735793 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.735851 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.735872 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.735895 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.735912 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:26Z","lastTransitionTime":"2026-02-16T13:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.752902 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb"] Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.753362 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.755549 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.756775 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.769334 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.784255 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.796610 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.809791 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.821736 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.838673 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.838718 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.838734 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.838752 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.838764 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:26Z","lastTransitionTime":"2026-02-16T13:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.842377 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:25Z\\\",\\\"message\\\":\\\".176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 13:32:25.052962 6255 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-5w4kf\\\\nI0216 13:32:25.052979 6255 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 13:32:25.052998 6255 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0216 13:32:25.053003 6255 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5e
fa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.858371 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.863331 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:23:52.34045697 +0000 UTC Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.873422 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.878735 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.878810 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:26 crc kubenswrapper[4812]: E0216 13:32:26.878829 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:26 crc kubenswrapper[4812]: E0216 13:32:26.879054 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.885321 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mo
untPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.897258 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.899640 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e823c28d-cc96-469c-a794-fb12a7ae6172-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gt4zb\" (UID: \"e823c28d-cc96-469c-a794-fb12a7ae6172\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.899717 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e823c28d-cc96-469c-a794-fb12a7ae6172-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gt4zb\" (UID: \"e823c28d-cc96-469c-a794-fb12a7ae6172\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.899740 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ms9qd\" (UniqueName: \"kubernetes.io/projected/e823c28d-cc96-469c-a794-fb12a7ae6172-kube-api-access-ms9qd\") pod \"ovnkube-control-plane-749d76644c-gt4zb\" (UID: \"e823c28d-cc96-469c-a794-fb12a7ae6172\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.899771 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e823c28d-cc96-469c-a794-fb12a7ae6172-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gt4zb\" (UID: \"e823c28d-cc96-469c-a794-fb12a7ae6172\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.909742 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3
dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.924328 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1d
e358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.935253 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.941625 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.941683 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.941693 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.941707 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.941717 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:26Z","lastTransitionTime":"2026-02-16T13:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.948617 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:26 crc kubenswrapper[4812]: I0216 13:32:26.958812 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:26Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.000432 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e823c28d-cc96-469c-a794-fb12a7ae6172-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gt4zb\" (UID: \"e823c28d-cc96-469c-a794-fb12a7ae6172\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.000494 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms9qd\" (UniqueName: \"kubernetes.io/projected/e823c28d-cc96-469c-a794-fb12a7ae6172-kube-api-access-ms9qd\") pod \"ovnkube-control-plane-749d76644c-gt4zb\" (UID: \"e823c28d-cc96-469c-a794-fb12a7ae6172\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.000514 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e823c28d-cc96-469c-a794-fb12a7ae6172-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gt4zb\" (UID: \"e823c28d-cc96-469c-a794-fb12a7ae6172\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.000545 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/e823c28d-cc96-469c-a794-fb12a7ae6172-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gt4zb\" (UID: \"e823c28d-cc96-469c-a794-fb12a7ae6172\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.000973 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e823c28d-cc96-469c-a794-fb12a7ae6172-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gt4zb\" (UID: \"e823c28d-cc96-469c-a794-fb12a7ae6172\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.001588 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e823c28d-cc96-469c-a794-fb12a7ae6172-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gt4zb\" (UID: \"e823c28d-cc96-469c-a794-fb12a7ae6172\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.005612 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e823c28d-cc96-469c-a794-fb12a7ae6172-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gt4zb\" (UID: \"e823c28d-cc96-469c-a794-fb12a7ae6172\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.015173 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms9qd\" (UniqueName: \"kubernetes.io/projected/e823c28d-cc96-469c-a794-fb12a7ae6172-kube-api-access-ms9qd\") pod \"ovnkube-control-plane-749d76644c-gt4zb\" (UID: \"e823c28d-cc96-469c-a794-fb12a7ae6172\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" Feb 16 13:32:27 crc 
kubenswrapper[4812]: I0216 13:32:27.043932 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.044275 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.044491 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.044641 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.044792 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:27Z","lastTransitionTime":"2026-02-16T13:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.067165 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" Feb 16 13:32:27 crc kubenswrapper[4812]: W0216 13:32:27.082918 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode823c28d_cc96_469c_a794_fb12a7ae6172.slice/crio-7c588758718b0bea0b62a7a88c5771e75d280877c2517a6c7f5dac53180dc293 WatchSource:0}: Error finding container 7c588758718b0bea0b62a7a88c5771e75d280877c2517a6c7f5dac53180dc293: Status 404 returned error can't find the container with id 7c588758718b0bea0b62a7a88c5771e75d280877c2517a6c7f5dac53180dc293 Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.146904 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.146934 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.146942 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.146957 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.146965 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:27Z","lastTransitionTime":"2026-02-16T13:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.148731 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" event={"ID":"e823c28d-cc96-469c-a794-fb12a7ae6172","Type":"ContainerStarted","Data":"7c588758718b0bea0b62a7a88c5771e75d280877c2517a6c7f5dac53180dc293"} Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.249700 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.249743 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.249757 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.249776 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.249789 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:27Z","lastTransitionTime":"2026-02-16T13:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.352963 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.353003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.353015 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.353034 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.353045 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:27Z","lastTransitionTime":"2026-02-16T13:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.457422 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.457987 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.457999 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.458020 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.458033 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:27Z","lastTransitionTime":"2026-02-16T13:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.561168 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.561215 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.561224 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.561239 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.561248 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:27Z","lastTransitionTime":"2026-02-16T13:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.663723 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.663762 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.663773 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.663789 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.663801 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:27Z","lastTransitionTime":"2026-02-16T13:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.766606 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.766646 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.766655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.766669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.766678 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:27Z","lastTransitionTime":"2026-02-16T13:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.858812 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-szt79"] Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.859233 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:27 crc kubenswrapper[4812]: E0216 13:32:27.859303 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.863961 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:49:40.381052122 +0000 UTC Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.868726 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.868833 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.868864 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.868881 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.868894 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:27Z","lastTransitionTime":"2026-02-16T13:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.871971 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.878537 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:27 crc kubenswrapper[4812]: E0216 13:32:27.878679 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.884338 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"na
me\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-16T13:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.896738 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.910743 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.923061 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.935255 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.946570 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.960764 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.970255 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.971257 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.971288 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.971299 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.971316 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.971330 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:27Z","lastTransitionTime":"2026-02-16T13:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.979906 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:27 crc kubenswrapper[4812]: I0216 13:32:27.997275 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:25Z\\\",\\\"message\\\":\\\".176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 13:32:25.052962 6255 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-5w4kf\\\\nI0216 13:32:25.052979 6255 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 13:32:25.052998 6255 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0216 13:32:25.053003 6255 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5e
fa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:27Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.009358 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.009921 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs\") pod \"network-metrics-daemon-szt79\" (UID: \"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\") " pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.009949 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md4ss\" (UniqueName: \"kubernetes.io/projected/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-kube-api-access-md4ss\") pod \"network-metrics-daemon-szt79\" (UID: \"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\") " pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.023266 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.036984 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.050326 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.066501 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.074345 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.074393 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:28 crc 
kubenswrapper[4812]: I0216 13:32:28.074406 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.074425 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.074438 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:28Z","lastTransitionTime":"2026-02-16T13:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.110916 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs\") pod \"network-metrics-daemon-szt79\" (UID: \"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\") " pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.110965 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md4ss\" (UniqueName: \"kubernetes.io/projected/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-kube-api-access-md4ss\") pod \"network-metrics-daemon-szt79\" (UID: \"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\") " pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.111023 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.111090 4812 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs podName:d2a1f0c6-cafa-4c67-a2ad-d6003e88613c nodeName:}" failed. No retries permitted until 2026-02-16 13:32:28.61107178 +0000 UTC m=+37.675402481 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs") pod "network-metrics-daemon-szt79" (UID: "d2a1f0c6-cafa-4c67-a2ad-d6003e88613c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.127384 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md4ss\" (UniqueName: \"kubernetes.io/projected/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-kube-api-access-md4ss\") pod \"network-metrics-daemon-szt79\" (UID: \"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\") " pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.152397 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" event={"ID":"e823c28d-cc96-469c-a794-fb12a7ae6172","Type":"ContainerStarted","Data":"6f8618e97cfc7a7d6673243425c62cfd9633fca65005e96b63e924c303f7e5da"} Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.152487 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" event={"ID":"e823c28d-cc96-469c-a794-fb12a7ae6172","Type":"ContainerStarted","Data":"d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b"} Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.163782 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b
9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.175188 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.176584 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.176631 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.176642 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.176654 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.176663 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:28Z","lastTransitionTime":"2026-02-16T13:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.191113 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.204721 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.217937 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13
fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341
f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\
\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.227917 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633f
ca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.243646 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.256773 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.270420 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.279810 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.279852 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.279865 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.279882 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.279894 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:28Z","lastTransitionTime":"2026-02-16T13:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.288568 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.302664 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc 
kubenswrapper[4812]: I0216 13:32:28.317675 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.329956 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.351124 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:25Z\\\",\\\"message\\\":\\\".176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 13:32:25.052962 6255 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-5w4kf\\\\nI0216 13:32:25.052979 6255 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 13:32:25.052998 6255 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0216 13:32:25.053003 6255 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5e
fa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.362764 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.376999 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.382868 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.382922 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.382934 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.382952 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.382964 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:28Z","lastTransitionTime":"2026-02-16T13:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.485270 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.485325 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.485337 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.485354 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.485368 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:28Z","lastTransitionTime":"2026-02-16T13:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.515542 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.515725 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.515766 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.515781 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.515804 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:32:44.515762978 +0000 UTC m=+53.580093679 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.515813 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.515875 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:28Z","lastTransitionTime":"2026-02-16T13:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.531235 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.535851 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.535926 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.535939 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.535957 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.535997 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:28Z","lastTransitionTime":"2026-02-16T13:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.549125 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.553656 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.553716 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.553729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.553748 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.553761 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:28Z","lastTransitionTime":"2026-02-16T13:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.566111 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.569657 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.569755 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.569768 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.569787 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.569798 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:28Z","lastTransitionTime":"2026-02-16T13:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.581293 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.584810 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.584847 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.584857 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.584872 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.584882 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:28Z","lastTransitionTime":"2026-02-16T13:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.596542 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:28Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.596663 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.597978 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.598037 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.598051 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.598064 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.598072 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:28Z","lastTransitionTime":"2026-02-16T13:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.616757 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.616899 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs\") pod \"network-metrics-daemon-szt79\" (UID: \"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\") " pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.616967 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.616997 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.617012 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.617064 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-16 13:32:44.61704699 +0000 UTC m=+53.681377691 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.616971 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.617088 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.617199 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.617283 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-16 13:32:44.617261656 +0000 UTC m=+53.681592357 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.617175 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.617320 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.617326 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.617373 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs podName:d2a1f0c6-cafa-4c67-a2ad-d6003e88613c nodeName:}" failed. No retries permitted until 2026-02-16 13:32:29.617353699 +0000 UTC m=+38.681684440 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs") pod "network-metrics-daemon-szt79" (UID: "d2a1f0c6-cafa-4c67-a2ad-d6003e88613c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.617396 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:44.61738367 +0000 UTC m=+53.681714421 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.617408 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.617432 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.617469 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.617540 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 13:32:44.617515373 +0000 UTC m=+53.681846144 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.700260 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.700330 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.700345 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.700362 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.700373 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:28Z","lastTransitionTime":"2026-02-16T13:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.802543 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.802626 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.802638 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.802655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.802667 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:28Z","lastTransitionTime":"2026-02-16T13:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.864905 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:58:25.985583775 +0000 UTC
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.878288 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.878325 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.878415 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 13:32:28 crc kubenswrapper[4812]: E0216 13:32:28.878596 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.905310 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.905359 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.905372 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.905388 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:28 crc kubenswrapper[4812]: I0216 13:32:28.905400 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:28Z","lastTransitionTime":"2026-02-16T13:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.010529 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.010589 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.010600 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.010616 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.010632 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:29Z","lastTransitionTime":"2026-02-16T13:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.112741 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.112790 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.112801 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.112818 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.112830 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:29Z","lastTransitionTime":"2026-02-16T13:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.215176 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.215377 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.215436 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.215645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.215773 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:29Z","lastTransitionTime":"2026-02-16T13:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.318410 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.318478 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.318491 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.318510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.318523 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:29Z","lastTransitionTime":"2026-02-16T13:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.420325 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.420357 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.420366 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.420379 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.420389 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:29Z","lastTransitionTime":"2026-02-16T13:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.522272 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.522297 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.522305 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.522318 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.522327 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:29Z","lastTransitionTime":"2026-02-16T13:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.624358 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.624400 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.624411 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.624427 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.624438 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:29Z","lastTransitionTime":"2026-02-16T13:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.626320 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs\") pod \"network-metrics-daemon-szt79\" (UID: \"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\") " pod="openshift-multus/network-metrics-daemon-szt79"
Feb 16 13:32:29 crc kubenswrapper[4812]: E0216 13:32:29.626543 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 13:32:29 crc kubenswrapper[4812]: E0216 13:32:29.626634 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs podName:d2a1f0c6-cafa-4c67-a2ad-d6003e88613c nodeName:}" failed. No retries permitted until 2026-02-16 13:32:31.626609561 +0000 UTC m=+40.690940262 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs") pod "network-metrics-daemon-szt79" (UID: "d2a1f0c6-cafa-4c67-a2ad-d6003e88613c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.726835 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.726874 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.726886 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.726902 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.726914 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:29Z","lastTransitionTime":"2026-02-16T13:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.828983 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.829035 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.829051 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.829072 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.829085 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:29Z","lastTransitionTime":"2026-02-16T13:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.865880 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 15:15:28.0170411 +0000 UTC
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.878251 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 13:32:29 crc kubenswrapper[4812]: E0216 13:32:29.878384 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.878492 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79"
Feb 16 13:32:29 crc kubenswrapper[4812]: E0216 13:32:29.878635 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.932066 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.932121 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.932130 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.932154 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:29 crc kubenswrapper[4812]: I0216 13:32:29.932164 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:29Z","lastTransitionTime":"2026-02-16T13:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.228143 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.228181 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.228191 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.228208 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.228221 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:30Z","lastTransitionTime":"2026-02-16T13:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.330857 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.330919 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.330933 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.330949 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.330961 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:30Z","lastTransitionTime":"2026-02-16T13:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.433544 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.433584 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.433595 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.433614 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.433623 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:30Z","lastTransitionTime":"2026-02-16T13:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.536206 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.536273 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.536282 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.536296 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.536324 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:30Z","lastTransitionTime":"2026-02-16T13:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.639019 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.639063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.639073 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.639087 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.639096 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:30Z","lastTransitionTime":"2026-02-16T13:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.741177 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.741212 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.741221 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.741234 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.741244 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:30Z","lastTransitionTime":"2026-02-16T13:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.844501 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.844566 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.844580 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.844599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.844610 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:30Z","lastTransitionTime":"2026-02-16T13:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.866973 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:53:24.70358386 +0000 UTC
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.878494 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 13:32:30 crc kubenswrapper[4812]: E0216 13:32:30.878623 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.878704 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 13:32:30 crc kubenswrapper[4812]: E0216 13:32:30.878989 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.947118 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.947163 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.947173 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.947186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:30 crc kubenswrapper[4812]: I0216 13:32:30.947195 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:30Z","lastTransitionTime":"2026-02-16T13:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.049880 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.049929 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.049942 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.049962 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.049974 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:31Z","lastTransitionTime":"2026-02-16T13:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.152676 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.152704 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.152747 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.152761 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.152771 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:31Z","lastTransitionTime":"2026-02-16T13:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.255160 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.255224 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.255237 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.255253 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.255265 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:31Z","lastTransitionTime":"2026-02-16T13:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.358090 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.358167 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.358190 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.358221 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.358242 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:31Z","lastTransitionTime":"2026-02-16T13:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.460335 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.460385 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.460398 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.460415 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.460427 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:31Z","lastTransitionTime":"2026-02-16T13:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.563276 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.563327 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.563337 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.563355 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.563365 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:31Z","lastTransitionTime":"2026-02-16T13:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.643671 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs\") pod \"network-metrics-daemon-szt79\" (UID: \"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\") " pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:31 crc kubenswrapper[4812]: E0216 13:32:31.643878 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 13:32:31 crc kubenswrapper[4812]: E0216 13:32:31.644004 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs podName:d2a1f0c6-cafa-4c67-a2ad-d6003e88613c nodeName:}" failed. No retries permitted until 2026-02-16 13:32:35.643972293 +0000 UTC m=+44.708303034 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs") pod "network-metrics-daemon-szt79" (UID: "d2a1f0c6-cafa-4c67-a2ad-d6003e88613c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.665914 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.665941 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.665950 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.665962 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.665973 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:31Z","lastTransitionTime":"2026-02-16T13:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.768743 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.768804 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.768821 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.768846 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.768864 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:31Z","lastTransitionTime":"2026-02-16T13:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.868091 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 11:11:58.912160013 +0000 UTC Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.870955 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.870985 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.870996 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.871011 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.871022 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:31Z","lastTransitionTime":"2026-02-16T13:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.878511 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.878522 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:31 crc kubenswrapper[4812]: E0216 13:32:31.878642 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:31 crc kubenswrapper[4812]: E0216 13:32:31.878739 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.890401 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:31Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.901016 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:31Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.919904 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:31Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.930398 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633fca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:31Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.940085 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:31Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.950410 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:31Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.964520 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:31Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.973608 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.973646 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.973655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.973671 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.973683 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:31Z","lastTransitionTime":"2026-02-16T13:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.989701 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:25Z\\\",\\\"message\\\":\\\".176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 13:32:25.052962 6255 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-5w4kf\\\\nI0216 13:32:25.052979 6255 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 13:32:25.052998 6255 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0216 13:32:25.053003 6255 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5e
fa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:31Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:31 crc kubenswrapper[4812]: I0216 13:32:31.999625 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:31Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.011705 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:32Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.022754 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:32Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.033915 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:32Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.045078 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:32Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.056971 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:32Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.068008 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:32Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.075869 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:32 crc 
kubenswrapper[4812]: I0216 13:32:32.075904 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.075913 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.075927 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.075938 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:32Z","lastTransitionTime":"2026-02-16T13:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.078735 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b
9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:32Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.178536 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.178569 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.178578 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:32 crc 
kubenswrapper[4812]: I0216 13:32:32.178592 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.178601 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:32Z","lastTransitionTime":"2026-02-16T13:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.280726 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.280774 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.280785 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.280802 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.280814 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:32Z","lastTransitionTime":"2026-02-16T13:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.383582 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.383644 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.383661 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.383686 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.383703 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:32Z","lastTransitionTime":"2026-02-16T13:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.485780 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.485824 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.485835 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.485849 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.485859 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:32Z","lastTransitionTime":"2026-02-16T13:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.588083 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.588128 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.588138 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.588154 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.588165 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:32Z","lastTransitionTime":"2026-02-16T13:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.689853 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.689907 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.689918 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.689933 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.689945 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:32Z","lastTransitionTime":"2026-02-16T13:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.792481 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.792520 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.792530 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.792548 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.792559 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:32Z","lastTransitionTime":"2026-02-16T13:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.868538 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 02:20:22.771435051 +0000 UTC Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.878849 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.878895 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:32 crc kubenswrapper[4812]: E0216 13:32:32.878978 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:32 crc kubenswrapper[4812]: E0216 13:32:32.879049 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.895142 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.895202 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.895211 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.895225 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.895236 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:32Z","lastTransitionTime":"2026-02-16T13:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.997941 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.997987 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.997999 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.998015 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:32 crc kubenswrapper[4812]: I0216 13:32:32.998030 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:32Z","lastTransitionTime":"2026-02-16T13:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.100586 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.100636 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.100652 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.100676 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.100696 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:33Z","lastTransitionTime":"2026-02-16T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.203412 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.203472 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.203483 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.203499 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.203510 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:33Z","lastTransitionTime":"2026-02-16T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.306051 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.306106 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.306119 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.306132 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.306141 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:33Z","lastTransitionTime":"2026-02-16T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.409057 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.409109 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.409118 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.409143 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.409155 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:33Z","lastTransitionTime":"2026-02-16T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.464849 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.477891 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:33Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.487630 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:33Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.496578 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:33Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:33 crc 
kubenswrapper[4812]: I0216 13:32:33.508820 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d
64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:33Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.511896 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.511936 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.511951 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.511971 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.511985 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:33Z","lastTransitionTime":"2026-02-16T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.518941 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:33Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.532707 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:33Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:33 crc 
kubenswrapper[4812]: I0216 13:32:33.543034 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:33Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.550952 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:33Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.572324 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:25Z\\\",\\\"message\\\":\\\".176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 13:32:25.052962 6255 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-5w4kf\\\\nI0216 13:32:25.052979 6255 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 13:32:25.052998 6255 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0216 13:32:25.053003 6255 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5e
fa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:33Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.586027 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:33Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.599336 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:33Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.612157 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96
d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:33Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.614412 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.614568 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.614638 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.614768 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.614849 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:33Z","lastTransitionTime":"2026-02-16T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.624873 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652
d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\
"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:33Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.637087 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:33Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.649641 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:33Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.663617 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f
4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633fca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:33Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.716875 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.716949 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.716960 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.716974 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.716983 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:33Z","lastTransitionTime":"2026-02-16T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.819795 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.819873 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.819897 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.819918 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.819932 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:33Z","lastTransitionTime":"2026-02-16T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.869793 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:35:36.604115157 +0000 UTC Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.878201 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:33 crc kubenswrapper[4812]: E0216 13:32:33.878374 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.878635 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:33 crc kubenswrapper[4812]: E0216 13:32:33.878890 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.922587 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.922631 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.922653 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.922672 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:33 crc kubenswrapper[4812]: I0216 13:32:33.922763 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:33Z","lastTransitionTime":"2026-02-16T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.025530 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.025579 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.025591 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.025610 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.025622 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:34Z","lastTransitionTime":"2026-02-16T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.128638 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.128675 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.128685 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.128701 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.128713 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:34Z","lastTransitionTime":"2026-02-16T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.230840 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.231071 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.231145 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.231257 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.231328 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:34Z","lastTransitionTime":"2026-02-16T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.333835 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.333871 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.333880 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.333895 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.333904 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:34Z","lastTransitionTime":"2026-02-16T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.436769 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.436832 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.436856 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.436883 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.436905 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:34Z","lastTransitionTime":"2026-02-16T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.539040 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.539079 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.539089 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.539107 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.539117 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:34Z","lastTransitionTime":"2026-02-16T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.641050 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.641101 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.641112 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.641129 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.641141 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:34Z","lastTransitionTime":"2026-02-16T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.744622 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.744710 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.744752 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.744787 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.744810 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:34Z","lastTransitionTime":"2026-02-16T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.847802 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.847844 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.847856 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.847875 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.847887 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:34Z","lastTransitionTime":"2026-02-16T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.870385 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 21:05:14.593092115 +0000 UTC Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.878005 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.878123 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:34 crc kubenswrapper[4812]: E0216 13:32:34.878212 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:34 crc kubenswrapper[4812]: E0216 13:32:34.878276 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.950095 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.950138 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.950146 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.950159 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:34 crc kubenswrapper[4812]: I0216 13:32:34.950167 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:34Z","lastTransitionTime":"2026-02-16T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.052279 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.052324 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.052336 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.052352 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.052362 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:35Z","lastTransitionTime":"2026-02-16T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.154719 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.154760 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.154768 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.154782 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.154814 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:35Z","lastTransitionTime":"2026-02-16T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.256692 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.257010 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.257021 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.257035 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.257044 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:35Z","lastTransitionTime":"2026-02-16T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.359300 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.359352 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.359363 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.359380 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.359391 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:35Z","lastTransitionTime":"2026-02-16T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.462409 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.462481 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.462495 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.462511 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.462522 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:35Z","lastTransitionTime":"2026-02-16T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.565171 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.565216 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.565227 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.565243 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.565253 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:35Z","lastTransitionTime":"2026-02-16T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.667468 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.667503 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.667512 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.667526 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.667534 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:35Z","lastTransitionTime":"2026-02-16T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.683315 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs\") pod \"network-metrics-daemon-szt79\" (UID: \"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\") " pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:35 crc kubenswrapper[4812]: E0216 13:32:35.683423 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 13:32:35 crc kubenswrapper[4812]: E0216 13:32:35.683496 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs podName:d2a1f0c6-cafa-4c67-a2ad-d6003e88613c nodeName:}" failed. No retries permitted until 2026-02-16 13:32:43.683481814 +0000 UTC m=+52.747812515 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs") pod "network-metrics-daemon-szt79" (UID: "d2a1f0c6-cafa-4c67-a2ad-d6003e88613c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.769972 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.770038 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.770060 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.770088 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.770110 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:35Z","lastTransitionTime":"2026-02-16T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.871243 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:06:48.466440418 +0000 UTC Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.873930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.873966 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.873985 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.874002 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.874013 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:35Z","lastTransitionTime":"2026-02-16T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.878417 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:35 crc kubenswrapper[4812]: E0216 13:32:35.878547 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.878623 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:35 crc kubenswrapper[4812]: E0216 13:32:35.878765 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.976642 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.976692 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.976700 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.976713 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:35 crc kubenswrapper[4812]: I0216 13:32:35.976723 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:35Z","lastTransitionTime":"2026-02-16T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.079162 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.079207 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.079222 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.079241 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.079251 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:36Z","lastTransitionTime":"2026-02-16T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.182102 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.182181 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.182195 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.182212 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.182225 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:36Z","lastTransitionTime":"2026-02-16T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.284564 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.284605 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.284614 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.284628 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.284638 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:36Z","lastTransitionTime":"2026-02-16T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.387010 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.387058 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.387069 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.387086 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.387097 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:36Z","lastTransitionTime":"2026-02-16T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.488955 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.488993 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.489002 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.489015 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.489023 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:36Z","lastTransitionTime":"2026-02-16T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.591340 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.591426 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.591491 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.591517 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.591530 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:36Z","lastTransitionTime":"2026-02-16T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.693716 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.693765 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.693777 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.693796 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.693807 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:36Z","lastTransitionTime":"2026-02-16T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.796193 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.796233 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.796245 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.796259 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.796268 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:36Z","lastTransitionTime":"2026-02-16T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.871612 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 14:10:59.001246598 +0000 UTC Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.878995 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.879009 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:36 crc kubenswrapper[4812]: E0216 13:32:36.879142 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:36 crc kubenswrapper[4812]: E0216 13:32:36.879304 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.899183 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.899228 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.899237 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.899251 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:36 crc kubenswrapper[4812]: I0216 13:32:36.899261 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:36Z","lastTransitionTime":"2026-02-16T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.001211 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.001244 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.001255 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.001294 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.001305 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:37Z","lastTransitionTime":"2026-02-16T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.103986 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.104034 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.104043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.104056 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.104065 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:37Z","lastTransitionTime":"2026-02-16T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.206184 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.206241 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.206254 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.206274 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.206289 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:37Z","lastTransitionTime":"2026-02-16T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.308491 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.308534 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.308546 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.308564 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.308577 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:37Z","lastTransitionTime":"2026-02-16T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.410811 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.410844 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.410852 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.410865 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.410874 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:37Z","lastTransitionTime":"2026-02-16T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.513340 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.513373 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.513382 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.513395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.513405 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:37Z","lastTransitionTime":"2026-02-16T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.621299 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.621387 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.621409 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.621435 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.621491 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:37Z","lastTransitionTime":"2026-02-16T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.724509 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.724561 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.724569 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.724583 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.724595 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:37Z","lastTransitionTime":"2026-02-16T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.827140 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.827180 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.827188 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.827201 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.827210 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:37Z","lastTransitionTime":"2026-02-16T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.872163 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:43:01.937602548 +0000 UTC Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.878658 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:37 crc kubenswrapper[4812]: E0216 13:32:37.878790 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.878658 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:37 crc kubenswrapper[4812]: E0216 13:32:37.878866 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.929726 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.929758 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.929768 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.929783 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:37 crc kubenswrapper[4812]: I0216 13:32:37.929794 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:37Z","lastTransitionTime":"2026-02-16T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.032219 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.032252 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.032261 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.032273 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.032286 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:38Z","lastTransitionTime":"2026-02-16T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.135475 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.135528 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.135539 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.135556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.135569 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:38Z","lastTransitionTime":"2026-02-16T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.238238 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.238279 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.238290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.238305 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.238313 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:38Z","lastTransitionTime":"2026-02-16T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.341135 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.341172 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.341181 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.341194 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.341203 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:38Z","lastTransitionTime":"2026-02-16T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.443201 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.443246 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.443258 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.443277 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.443290 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:38Z","lastTransitionTime":"2026-02-16T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.545032 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.545070 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.545077 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.545091 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.545101 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:38Z","lastTransitionTime":"2026-02-16T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.638811 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.638849 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.638860 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.638876 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.638887 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:38Z","lastTransitionTime":"2026-02-16T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:38 crc kubenswrapper[4812]: E0216 13:32:38.655083 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:38Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.659483 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.659531 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.659542 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.659559 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.659569 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:38Z","lastTransitionTime":"2026-02-16T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:38 crc kubenswrapper[4812]: E0216 13:32:38.672887 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:38Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.676875 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.676918 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.676927 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.676944 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.676958 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:38Z","lastTransitionTime":"2026-02-16T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:38 crc kubenswrapper[4812]: E0216 13:32:38.690684 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:38Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.694542 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.694583 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.694594 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.694611 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.694622 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:38Z","lastTransitionTime":"2026-02-16T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:38 crc kubenswrapper[4812]: E0216 13:32:38.705907 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:38Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.708903 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.708933 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.708943 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.708957 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.708966 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:38Z","lastTransitionTime":"2026-02-16T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:38 crc kubenswrapper[4812]: E0216 13:32:38.719792 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:38Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:38 crc kubenswrapper[4812]: E0216 13:32:38.719970 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.721764 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.721819 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.721829 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.721846 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.721872 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:38Z","lastTransitionTime":"2026-02-16T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.824608 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.824657 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.824670 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.824693 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.824708 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:38Z","lastTransitionTime":"2026-02-16T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.873338 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 10:57:43.966386496 +0000 UTC Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.878660 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.878705 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:38 crc kubenswrapper[4812]: E0216 13:32:38.878808 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:38 crc kubenswrapper[4812]: E0216 13:32:38.879034 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.927340 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.927411 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.927427 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.927479 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:38 crc kubenswrapper[4812]: I0216 13:32:38.927498 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:38Z","lastTransitionTime":"2026-02-16T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.029629 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.029675 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.029684 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.029699 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.029711 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:39Z","lastTransitionTime":"2026-02-16T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.132190 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.132230 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.132240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.132254 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.132264 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:39Z","lastTransitionTime":"2026-02-16T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.233936 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.233981 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.233995 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.234013 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.234025 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:39Z","lastTransitionTime":"2026-02-16T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.336960 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.337031 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.337074 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.337112 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.337138 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:39Z","lastTransitionTime":"2026-02-16T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.440143 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.440225 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.440265 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.440294 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.440315 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:39Z","lastTransitionTime":"2026-02-16T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.542088 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.542161 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.542184 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.542213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.542236 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:39Z","lastTransitionTime":"2026-02-16T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.644582 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.644646 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.644657 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.644672 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.644682 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:39Z","lastTransitionTime":"2026-02-16T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.747183 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.747256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.747282 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.747314 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.747339 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:39Z","lastTransitionTime":"2026-02-16T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.849821 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.849854 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.849864 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.849880 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.849888 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:39Z","lastTransitionTime":"2026-02-16T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.873997 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 08:28:21.538326526 +0000 UTC Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.878467 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.878521 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:39 crc kubenswrapper[4812]: E0216 13:32:39.878584 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:39 crc kubenswrapper[4812]: E0216 13:32:39.879248 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.879883 4812 scope.go:117] "RemoveContainer" containerID="85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.952437 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.952515 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.952525 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.952559 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:39 crc kubenswrapper[4812]: I0216 13:32:39.952569 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:39Z","lastTransitionTime":"2026-02-16T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.054508 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.054657 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.054684 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.054723 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.054744 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:40Z","lastTransitionTime":"2026-02-16T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.157209 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.157250 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.157260 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.157275 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.157283 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:40Z","lastTransitionTime":"2026-02-16T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.260180 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.260219 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.260230 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.260245 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.260256 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:40Z","lastTransitionTime":"2026-02-16T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.261140 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovnkube-controller/1.log" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.263897 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerStarted","Data":"c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19"} Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.264574 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.275435 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320d
aab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.287272 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.301486 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.315268 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.328254 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.340750 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66
438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633fca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\
" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.352267 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\
"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.362840 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.362881 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.362924 4812 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.362942 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.362991 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:40Z","lastTransitionTime":"2026-02-16T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.363502 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.374582 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.384533 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:32:40Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.397595 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f85
9b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.410214 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.419480 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.439226 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:25Z\\\",\\\"message\\\":\\\".176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 13:32:25.052962 6255 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-5w4kf\\\\nI0216 13:32:25.052979 6255 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 13:32:25.052998 6255 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0216 13:32:25.053003 6255 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z 
is\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.456277 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:40 crc 
kubenswrapper[4812]: I0216 13:32:40.465110 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.465157 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.465168 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.465183 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.465192 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:40Z","lastTransitionTime":"2026-02-16T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.469383 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.567324 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.567362 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.567372 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.567388 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.567398 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:40Z","lastTransitionTime":"2026-02-16T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.670023 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.670068 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.670082 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.670099 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.670113 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:40Z","lastTransitionTime":"2026-02-16T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.772824 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.772992 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.773012 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.773039 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.773057 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:40Z","lastTransitionTime":"2026-02-16T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.874138 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 23:42:40.701160057 +0000 UTC Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.875476 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.875519 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.875529 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.875542 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.875552 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:40Z","lastTransitionTime":"2026-02-16T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.878853 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.878982 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:40 crc kubenswrapper[4812]: E0216 13:32:40.879062 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:40 crc kubenswrapper[4812]: E0216 13:32:40.879214 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.978098 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.978139 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.978150 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.978168 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:40 crc kubenswrapper[4812]: I0216 13:32:40.978180 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:40Z","lastTransitionTime":"2026-02-16T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.080821 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.080879 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.080890 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.080907 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.080920 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:41Z","lastTransitionTime":"2026-02-16T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.184016 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.184078 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.184096 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.184120 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.184140 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:41Z","lastTransitionTime":"2026-02-16T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.270675 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovnkube-controller/2.log" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.271982 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovnkube-controller/1.log" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.276708 4812 generic.go:334] "Generic (PLEG): container finished" podID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerID="c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19" exitCode=1 Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.276754 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerDied","Data":"c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19"} Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.276797 4812 scope.go:117] "RemoveContainer" containerID="85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.279169 4812 scope.go:117] "RemoveContainer" containerID="c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19" Feb 16 13:32:41 crc kubenswrapper[4812]: E0216 13:32:41.279891 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.287399 4812 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.287435 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.287468 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.287484 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.287496 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:41Z","lastTransitionTime":"2026-02-16T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.295388 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.310726 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.323175 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.336945 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.349940 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.359901 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66
438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633fca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\
" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.371313 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\
"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.382885 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.389333 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.389384 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.389395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.389410 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.389421 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:41Z","lastTransitionTime":"2026-02-16T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.393699 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.404190 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.417102 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f85
9b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.428525 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.437853 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.455341 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:25Z\\\",\\\"message\\\":\\\".176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 13:32:25.052962 6255 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-5w4kf\\\\nI0216 13:32:25.052979 6255 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 13:32:25.052998 6255 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0216 13:32:25.053003 6255 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:40Z\\\",\\\"message\\\":\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", 
Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 13:32:40.612720 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.464662 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.475436 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.491866 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.491903 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.491914 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.491930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 
13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.491942 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:41Z","lastTransitionTime":"2026-02-16T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.595063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.595111 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.595127 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.595154 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.595166 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:41Z","lastTransitionTime":"2026-02-16T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.698473 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.698549 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.698564 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.698586 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.698603 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:41Z","lastTransitionTime":"2026-02-16T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.801269 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.801339 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.801366 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.801396 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.801419 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:41Z","lastTransitionTime":"2026-02-16T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.875135 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 18:36:14.853581061 +0000 UTC Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.878624 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.878629 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:41 crc kubenswrapper[4812]: E0216 13:32:41.878794 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:32:41 crc kubenswrapper[4812]: E0216 13:32:41.878913 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.891523 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.903474 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.903965 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.904040 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.904101 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.904157 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:41Z","lastTransitionTime":"2026-02-16T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.903925 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.920042 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026
-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7ad
c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.931715 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633f
ca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.943370 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.956004 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.967640 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:41 crc kubenswrapper[4812]: I0216 13:32:41.989984 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d3dc0a8dee9acb7c22340a3bf0d3b957f66b87aafd9d79d65ad740e1b3e73f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:25Z\\\",\\\"message\\\":\\\".176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 13:32:25.052962 6255 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-5w4kf\\\\nI0216 13:32:25.052979 6255 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 13:32:25.052998 6255 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0216 13:32:25.053003 6255 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:25Z is\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:40Z\\\",\\\"message\\\":\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", 
Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 13:32:40.612720 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:41Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.002910 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.006891 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.007012 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.007091 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.007183 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.007257 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:42Z","lastTransitionTime":"2026-02-16T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.016919 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.026680 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.038034 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f85
9b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.048926 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.058884 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.069929 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.080301 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96
d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.109184 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.109231 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.109243 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.109260 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.109271 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:42Z","lastTransitionTime":"2026-02-16T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.210743 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.211036 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.211151 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.211286 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.211406 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:42Z","lastTransitionTime":"2026-02-16T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.280683 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovnkube-controller/2.log" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.283755 4812 scope.go:117] "RemoveContainer" containerID="c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19" Feb 16 13:32:42 crc kubenswrapper[4812]: E0216 13:32:42.284019 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.297187 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.307149 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.313426 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.313599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.313687 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.313791 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.313866 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:42Z","lastTransitionTime":"2026-02-16T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.324004 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:40Z\\\",\\\"message\\\":\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 13:32:40.612720 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5e
fa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.334057 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.345647 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 
13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc1604
25e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.356780 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.368511 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f85
9b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.382071 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.400399 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.413613 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96
d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.416669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.416698 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.416709 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.416725 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.416737 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:42Z","lastTransitionTime":"2026-02-16T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.431306 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633fca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.443653 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.462323 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.477271 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.490886 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.503360 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:42Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.519092 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.519131 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.519142 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.519157 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.519166 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:42Z","lastTransitionTime":"2026-02-16T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.622146 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.622217 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.622240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.622266 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.622287 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:42Z","lastTransitionTime":"2026-02-16T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.725245 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.725294 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.725310 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.725334 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.725350 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:42Z","lastTransitionTime":"2026-02-16T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.828095 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.828147 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.828158 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.828176 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.828188 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:42Z","lastTransitionTime":"2026-02-16T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.875965 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:11:23.859375932 +0000 UTC Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.878330 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.878353 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:42 crc kubenswrapper[4812]: E0216 13:32:42.878461 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:42 crc kubenswrapper[4812]: E0216 13:32:42.878556 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.930667 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.931098 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.931309 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.931582 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:42 crc kubenswrapper[4812]: I0216 13:32:42.931782 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:42Z","lastTransitionTime":"2026-02-16T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.034820 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.035102 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.035176 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.035256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.035327 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:43Z","lastTransitionTime":"2026-02-16T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.138178 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.138420 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.138515 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.138585 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.138650 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:43Z","lastTransitionTime":"2026-02-16T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.241154 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.241205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.241217 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.241233 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.241246 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:43Z","lastTransitionTime":"2026-02-16T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.343928 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.343959 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.343970 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.343986 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.343996 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:43Z","lastTransitionTime":"2026-02-16T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.446889 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.446918 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.446928 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.446941 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.446950 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:43Z","lastTransitionTime":"2026-02-16T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.550149 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.550216 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.550242 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.550271 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.550293 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:43Z","lastTransitionTime":"2026-02-16T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.653312 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.653360 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.653372 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.653387 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.653401 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:43Z","lastTransitionTime":"2026-02-16T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.756618 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.756660 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.756671 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.756740 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.756753 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:43Z","lastTransitionTime":"2026-02-16T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.770592 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs\") pod \"network-metrics-daemon-szt79\" (UID: \"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\") " pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:43 crc kubenswrapper[4812]: E0216 13:32:43.770884 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 13:32:43 crc kubenswrapper[4812]: E0216 13:32:43.771006 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs podName:d2a1f0c6-cafa-4c67-a2ad-d6003e88613c nodeName:}" failed. No retries permitted until 2026-02-16 13:32:59.770968486 +0000 UTC m=+68.835299237 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs") pod "network-metrics-daemon-szt79" (UID: "d2a1f0c6-cafa-4c67-a2ad-d6003e88613c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.859175 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.859214 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.859223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.859238 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.859247 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:43Z","lastTransitionTime":"2026-02-16T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.876262 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:55:51.564387289 +0000 UTC Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.878732 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:43 crc kubenswrapper[4812]: E0216 13:32:43.878861 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.878735 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:43 crc kubenswrapper[4812]: E0216 13:32:43.879105 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.962292 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.962359 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.962386 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.962416 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:43 crc kubenswrapper[4812]: I0216 13:32:43.962439 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:43Z","lastTransitionTime":"2026-02-16T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.064378 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.064664 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.064781 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.064902 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.065000 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:44Z","lastTransitionTime":"2026-02-16T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.167154 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.167420 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.167583 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.167703 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.167830 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:44Z","lastTransitionTime":"2026-02-16T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.262760 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.270403 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.270438 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.270463 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.270485 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.270496 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:44Z","lastTransitionTime":"2026-02-16T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.271949 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.276485 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.289515 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.300998 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.318143 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:40Z\\\",\\\"message\\\":\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 13:32:40.612720 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5e
fa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.327500 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.338123 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 
13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc1604
25e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.347514 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.358733 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.370775 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.372266 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:44 crc 
kubenswrapper[4812]: I0216 13:32:44.372332 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.372350 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.372373 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.372389 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:44Z","lastTransitionTime":"2026-02-16T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.381630 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b
9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.394749 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1d
e358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.407599 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633fca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T13:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.419682 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.432859 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.444307 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.452395 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:44Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.475146 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.475183 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.475195 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.475209 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.475221 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:44Z","lastTransitionTime":"2026-02-16T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.577886 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.577933 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.577938 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.577949 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.578093 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:44 crc kubenswrapper[4812]: E0216 13:32:44.578101 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:33:16.578078434 +0000 UTC m=+85.642409145 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.578116 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:44Z","lastTransitionTime":"2026-02-16T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.678772 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.678849 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.678875 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.678932 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:44 crc kubenswrapper[4812]: E0216 13:32:44.678960 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 13:32:44 crc kubenswrapper[4812]: E0216 13:32:44.679074 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 13:33:16.679031227 +0000 UTC m=+85.743361918 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 13:32:44 crc kubenswrapper[4812]: E0216 13:32:44.679311 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 13:32:44 crc kubenswrapper[4812]: E0216 13:32:44.679327 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 13:32:44 crc kubenswrapper[4812]: E0216 13:32:44.679326 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 13:32:44 crc kubenswrapper[4812]: E0216 13:32:44.679366 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 13:32:44 crc kubenswrapper[4812]: E0216 13:32:44.679384 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:44 crc kubenswrapper[4812]: E0216 13:32:44.679338 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:44 crc kubenswrapper[4812]: E0216 13:32:44.679478 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 13:33:16.679430739 +0000 UTC m=+85.743761490 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:44 crc kubenswrapper[4812]: E0216 13:32:44.679534 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 13:33:16.679513761 +0000 UTC m=+85.743844462 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:32:44 crc kubenswrapper[4812]: E0216 13:32:44.679863 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 13:32:44 crc kubenswrapper[4812]: E0216 13:32:44.679927 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 13:33:16.679918183 +0000 UTC m=+85.744248884 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.681076 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.681107 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.681116 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.681129 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.681138 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:44Z","lastTransitionTime":"2026-02-16T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.783839 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.783890 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.783900 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.783914 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.783922 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:44Z","lastTransitionTime":"2026-02-16T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.877308 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:55:25.687642965 +0000 UTC Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.878845 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.878855 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:44 crc kubenswrapper[4812]: E0216 13:32:44.879057 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:44 crc kubenswrapper[4812]: E0216 13:32:44.879301 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.887366 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.887405 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.887418 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.887436 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.887470 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:44Z","lastTransitionTime":"2026-02-16T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.989539 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.989571 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.989580 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.989593 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:44 crc kubenswrapper[4812]: I0216 13:32:44.989601 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:44Z","lastTransitionTime":"2026-02-16T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.092015 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.092213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.092221 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.092234 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.092242 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:45Z","lastTransitionTime":"2026-02-16T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.194212 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.194243 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.194262 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.194278 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.194289 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:45Z","lastTransitionTime":"2026-02-16T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.296613 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.296648 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.296656 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.296671 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.296679 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:45Z","lastTransitionTime":"2026-02-16T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.399195 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.399247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.399256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.399270 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.399279 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:45Z","lastTransitionTime":"2026-02-16T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.502248 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.502322 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.502335 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.502356 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.502371 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:45Z","lastTransitionTime":"2026-02-16T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.605297 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.605332 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.605361 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.605377 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.605386 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:45Z","lastTransitionTime":"2026-02-16T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.708789 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.708847 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.708860 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.708878 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.708890 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:45Z","lastTransitionTime":"2026-02-16T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.811493 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.811568 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.811578 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.811590 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.811599 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:45Z","lastTransitionTime":"2026-02-16T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.877918 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:15:20.141122832 +0000 UTC Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.878054 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.878123 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:45 crc kubenswrapper[4812]: E0216 13:32:45.878175 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:45 crc kubenswrapper[4812]: E0216 13:32:45.878287 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.913900 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.913943 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.913953 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.913969 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:45 crc kubenswrapper[4812]: I0216 13:32:45.913981 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:45Z","lastTransitionTime":"2026-02-16T13:32:45Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.017003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.017059 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.017071 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.017090 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.017102 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:46Z","lastTransitionTime":"2026-02-16T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.120101 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.120140 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.120152 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.120167 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.120176 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:46Z","lastTransitionTime":"2026-02-16T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.223235 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.223320 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.223339 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.223365 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.223399 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:46Z","lastTransitionTime":"2026-02-16T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.325433 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.325543 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.325564 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.325589 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.325607 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:46Z","lastTransitionTime":"2026-02-16T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.427964 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.428014 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.428026 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.428043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.428055 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:46Z","lastTransitionTime":"2026-02-16T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.530463 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.530506 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.530524 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.530541 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.530554 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:46Z","lastTransitionTime":"2026-02-16T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.633344 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.633396 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.633407 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.633426 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.633455 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:46Z","lastTransitionTime":"2026-02-16T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.735732 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.735775 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.735790 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.735810 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.735824 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:46Z","lastTransitionTime":"2026-02-16T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.838334 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.838541 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.838556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.838569 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.838577 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:46Z","lastTransitionTime":"2026-02-16T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.878899 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.878978 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.878900 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:52:24.509821752 +0000 UTC Feb 16 13:32:46 crc kubenswrapper[4812]: E0216 13:32:46.879036 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:46 crc kubenswrapper[4812]: E0216 13:32:46.879152 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.941140 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.941181 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.941193 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.941210 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:46 crc kubenswrapper[4812]: I0216 13:32:46.941221 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:46Z","lastTransitionTime":"2026-02-16T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.044687 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.044734 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.044747 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.044765 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.044776 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:47Z","lastTransitionTime":"2026-02-16T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.148111 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.148141 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.148151 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.148165 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.148176 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:47Z","lastTransitionTime":"2026-02-16T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.250384 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.250459 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.250471 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.250492 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.250504 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:47Z","lastTransitionTime":"2026-02-16T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.353099 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.353317 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.353429 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.353541 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.353612 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:47Z","lastTransitionTime":"2026-02-16T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.456117 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.456663 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.456747 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.456829 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.456897 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:47Z","lastTransitionTime":"2026-02-16T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.559824 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.559865 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.559877 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.559893 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.559903 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:47Z","lastTransitionTime":"2026-02-16T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.662540 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.662573 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.662586 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.662602 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.662612 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:47Z","lastTransitionTime":"2026-02-16T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.764557 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.764606 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.764619 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.764635 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.764648 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:47Z","lastTransitionTime":"2026-02-16T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.868033 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.868077 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.868093 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.868108 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.868119 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:47Z","lastTransitionTime":"2026-02-16T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.877979 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.878024 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:47 crc kubenswrapper[4812]: E0216 13:32:47.878153 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:47 crc kubenswrapper[4812]: E0216 13:32:47.878340 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.879234 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 12:44:46.618029671 +0000 UTC Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.971374 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.971436 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.971474 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.971496 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:47 crc kubenswrapper[4812]: I0216 13:32:47.971511 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:47Z","lastTransitionTime":"2026-02-16T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.074388 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.074488 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.074514 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.074543 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.074565 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:48Z","lastTransitionTime":"2026-02-16T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.176952 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.177017 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.177030 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.177047 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.177061 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:48Z","lastTransitionTime":"2026-02-16T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.279479 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.279509 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.279518 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.279531 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.279540 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:48Z","lastTransitionTime":"2026-02-16T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.381316 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.381354 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.381364 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.381380 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.381391 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:48Z","lastTransitionTime":"2026-02-16T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.483747 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.483787 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.483796 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.483811 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.483819 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:48Z","lastTransitionTime":"2026-02-16T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.586017 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.586086 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.586109 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.586176 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.586202 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:48Z","lastTransitionTime":"2026-02-16T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.688627 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.688702 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.688726 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.688761 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.688784 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:48Z","lastTransitionTime":"2026-02-16T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.791394 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.791478 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.791491 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.791507 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.791518 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:48Z","lastTransitionTime":"2026-02-16T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.878053 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.878061 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:48 crc kubenswrapper[4812]: E0216 13:32:48.878281 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:48 crc kubenswrapper[4812]: E0216 13:32:48.878519 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.880193 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 16:28:32.272171728 +0000 UTC Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.894827 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.894892 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.894907 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.894929 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.894944 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:48Z","lastTransitionTime":"2026-02-16T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.967201 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.967234 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.967242 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.967255 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.967264 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:48Z","lastTransitionTime":"2026-02-16T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:48 crc kubenswrapper[4812]: E0216 13:32:48.978582 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:48Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.981569 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.981640 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.981654 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.981671 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.981682 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:48Z","lastTransitionTime":"2026-02-16T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:48 crc kubenswrapper[4812]: E0216 13:32:48.993623 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:48Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.996690 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.996723 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.996733 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.996749 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:48 crc kubenswrapper[4812]: I0216 13:32:48.996760 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:48Z","lastTransitionTime":"2026-02-16T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:49 crc kubenswrapper[4812]: E0216 13:32:49.006634 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:49Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.009979 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.010019 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.010029 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.010044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.010053 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:49Z","lastTransitionTime":"2026-02-16T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:49 crc kubenswrapper[4812]: E0216 13:32:49.019961 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:49Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.023167 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.023202 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.023213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.023227 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.023235 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:49Z","lastTransitionTime":"2026-02-16T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:49 crc kubenswrapper[4812]: E0216 13:32:49.035345 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:49Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:49 crc kubenswrapper[4812]: E0216 13:32:49.035496 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.037115 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.037147 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.037158 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.037174 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.037186 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:49Z","lastTransitionTime":"2026-02-16T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.139524 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.139557 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.139565 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.139578 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.139587 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:49Z","lastTransitionTime":"2026-02-16T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.242223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.242579 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.242692 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.242772 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.243001 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:49Z","lastTransitionTime":"2026-02-16T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.345573 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.345640 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.345656 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.345680 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.345704 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:49Z","lastTransitionTime":"2026-02-16T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.448571 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.448633 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.448652 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.448676 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.448694 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:49Z","lastTransitionTime":"2026-02-16T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.551976 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.552079 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.552100 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.552126 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.552147 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:49Z","lastTransitionTime":"2026-02-16T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.656186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.656247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.656259 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.656275 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.656310 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:49Z","lastTransitionTime":"2026-02-16T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.759592 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.759656 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.759676 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.759707 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.759726 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:49Z","lastTransitionTime":"2026-02-16T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.862260 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.862305 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.862319 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.862336 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.862352 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:49Z","lastTransitionTime":"2026-02-16T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.878861 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.878918 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:49 crc kubenswrapper[4812]: E0216 13:32:49.879059 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:32:49 crc kubenswrapper[4812]: E0216 13:32:49.879182 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.880492 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 11:16:56.159410377 +0000 UTC Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.965282 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.965328 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.965344 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.965365 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:49 crc kubenswrapper[4812]: I0216 13:32:49.965393 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:49Z","lastTransitionTime":"2026-02-16T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.068876 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.068948 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.068972 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.069002 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.069025 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:50Z","lastTransitionTime":"2026-02-16T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.172660 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.172722 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.172739 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.172761 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.172785 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:50Z","lastTransitionTime":"2026-02-16T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.275525 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.275562 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.275571 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.275584 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.275593 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:50Z","lastTransitionTime":"2026-02-16T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.378302 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.378465 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.378486 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.378509 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.378527 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:50Z","lastTransitionTime":"2026-02-16T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.481893 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.481979 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.482001 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.482033 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.482055 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:50Z","lastTransitionTime":"2026-02-16T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.585583 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.585652 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.585673 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.585701 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.585723 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:50Z","lastTransitionTime":"2026-02-16T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.689388 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.689422 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.689430 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.689461 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.689470 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:50Z","lastTransitionTime":"2026-02-16T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.791881 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.791979 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.792004 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.792040 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.792069 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:50Z","lastTransitionTime":"2026-02-16T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.878046 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.878164 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:50 crc kubenswrapper[4812]: E0216 13:32:50.878269 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:50 crc kubenswrapper[4812]: E0216 13:32:50.878366 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.881068 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:27:05.183430184 +0000 UTC Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.894868 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.895497 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.895542 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.895562 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.895574 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:50Z","lastTransitionTime":"2026-02-16T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.999171 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.999226 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.999242 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.999264 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:50 crc kubenswrapper[4812]: I0216 13:32:50.999281 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:50Z","lastTransitionTime":"2026-02-16T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.102104 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.102157 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.102181 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.102213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.102236 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:51Z","lastTransitionTime":"2026-02-16T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.205171 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.205228 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.205248 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.205272 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.205290 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:51Z","lastTransitionTime":"2026-02-16T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.308618 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.309010 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.309188 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.309839 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.310013 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:51Z","lastTransitionTime":"2026-02-16T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.412304 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.412592 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.412738 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.412865 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.412987 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:51Z","lastTransitionTime":"2026-02-16T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.524887 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.524950 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.524969 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.524992 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.525011 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:51Z","lastTransitionTime":"2026-02-16T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.628621 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.628676 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.628693 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.628716 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.628733 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:51Z","lastTransitionTime":"2026-02-16T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.732187 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.732226 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.732235 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.732252 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.732261 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:51Z","lastTransitionTime":"2026-02-16T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.834035 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.834267 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.834335 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.834424 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.834532 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:51Z","lastTransitionTime":"2026-02-16T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.878758 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.878785 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:51 crc kubenswrapper[4812]: E0216 13:32:51.878899 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:51 crc kubenswrapper[4812]: E0216 13:32:51.879100 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.881966 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:32:06.993017273 +0000 UTC Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.901764 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:51Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.918113 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:51Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.938011 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.938068 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.938082 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.938098 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.938108 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:51Z","lastTransitionTime":"2026-02-16T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.938308 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:51Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.949361 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633fca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:51Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.961161 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4574e2db-75d7-4da6-bdf8-84a06c617799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85905db3e100d71dfb29420eccfd9a129be4b9a6950a8e5e2915d7f8aabcc255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858f53f244902f66ee53409db591138aba707c545b1f7cc0da69a691be1e2138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3e9624fe4d351638e9b45a1d575c06a3c9e7e12a77dcd8cb6a61996fe51fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd9201d
7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:51Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.975833 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:51Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:51 crc kubenswrapper[4812]: I0216 13:32:51.990010 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:51Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.002166 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:51Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:52 crc 
kubenswrapper[4812]: I0216 13:32:52.021393 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d
64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:52Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.033014 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:32:52Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.040106 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.040138 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.040146 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.040159 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.040168 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:52Z","lastTransitionTime":"2026-02-16T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.046542 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:52Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.058775 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:52Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.067099 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:52Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.105835 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:40Z\\\",\\\"message\\\":\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 13:32:40.612720 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5e
fa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:52Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.124179 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:52Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.142262 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:52Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.142725 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:52 crc 
kubenswrapper[4812]: I0216 13:32:52.142759 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.142773 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.142791 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.142804 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:52Z","lastTransitionTime":"2026-02-16T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.153763 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b
9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:52Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.245004 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.245043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.245057 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:52 crc 
kubenswrapper[4812]: I0216 13:32:52.245073 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.245086 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:52Z","lastTransitionTime":"2026-02-16T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.347360 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.347414 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.347424 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.347482 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.347501 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:52Z","lastTransitionTime":"2026-02-16T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.450585 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.450667 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.450683 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.450700 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.450712 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:52Z","lastTransitionTime":"2026-02-16T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.554002 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.554049 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.554060 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.554075 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.554086 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:52Z","lastTransitionTime":"2026-02-16T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.656721 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.656779 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.656789 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.656809 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.656822 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:52Z","lastTransitionTime":"2026-02-16T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.759759 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.759800 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.759831 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.759868 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.759880 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:52Z","lastTransitionTime":"2026-02-16T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.862398 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.862493 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.862512 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.862533 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.862546 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:52Z","lastTransitionTime":"2026-02-16T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.878031 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.878082 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:52 crc kubenswrapper[4812]: E0216 13:32:52.878232 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:52 crc kubenswrapper[4812]: E0216 13:32:52.878394 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.882984 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 18:14:27.490949467 +0000 UTC Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.965897 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.965940 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.965952 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.965968 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:52 crc kubenswrapper[4812]: I0216 13:32:52.965978 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:52Z","lastTransitionTime":"2026-02-16T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.067933 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.067982 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.067992 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.068006 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.068015 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:53Z","lastTransitionTime":"2026-02-16T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.170698 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.170735 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.170744 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.170758 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.170767 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:53Z","lastTransitionTime":"2026-02-16T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.273580 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.274081 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.274111 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.274140 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.274168 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:53Z","lastTransitionTime":"2026-02-16T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.377777 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.377836 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.377855 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.377880 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.377897 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:53Z","lastTransitionTime":"2026-02-16T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.481067 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.481117 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.481134 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.481152 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.481166 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:53Z","lastTransitionTime":"2026-02-16T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.583306 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.583364 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.583380 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.583405 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.583420 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:53Z","lastTransitionTime":"2026-02-16T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.685927 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.685980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.685992 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.686016 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.686040 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:53Z","lastTransitionTime":"2026-02-16T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.789237 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.789306 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.789317 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.789333 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.789345 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:53Z","lastTransitionTime":"2026-02-16T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.878761 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.878811 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:53 crc kubenswrapper[4812]: E0216 13:32:53.878918 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:32:53 crc kubenswrapper[4812]: E0216 13:32:53.879002 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.883517 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:45:03.174242707 +0000 UTC Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.891620 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.891649 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.891658 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.891669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.891679 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:53Z","lastTransitionTime":"2026-02-16T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.994865 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.994965 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.994990 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.995021 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:53 crc kubenswrapper[4812]: I0216 13:32:53.995045 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:53Z","lastTransitionTime":"2026-02-16T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.097304 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.097385 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.097395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.097408 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.097417 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:54Z","lastTransitionTime":"2026-02-16T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.199655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.199698 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.199708 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.199725 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.199737 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:54Z","lastTransitionTime":"2026-02-16T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.302262 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.302296 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.302307 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.302320 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.302330 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:54Z","lastTransitionTime":"2026-02-16T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.405191 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.405256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.405275 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.405298 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.405317 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:54Z","lastTransitionTime":"2026-02-16T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.508189 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.508241 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.508260 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.508284 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.508301 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:54Z","lastTransitionTime":"2026-02-16T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.611130 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.611189 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.611199 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.611213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.611222 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:54Z","lastTransitionTime":"2026-02-16T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.713829 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.713871 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.713883 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.713900 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.713913 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:54Z","lastTransitionTime":"2026-02-16T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.816877 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.816930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.816955 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.816977 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.816992 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:54Z","lastTransitionTime":"2026-02-16T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.878842 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.878877 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:54 crc kubenswrapper[4812]: E0216 13:32:54.879076 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:54 crc kubenswrapper[4812]: E0216 13:32:54.879241 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.883806 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:37:38.656450933 +0000 UTC Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.920689 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.920743 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.920756 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.920779 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:54 crc kubenswrapper[4812]: I0216 13:32:54.920792 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:54Z","lastTransitionTime":"2026-02-16T13:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.022555 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.022597 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.022607 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.022623 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.022633 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:55Z","lastTransitionTime":"2026-02-16T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.126099 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.126132 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.126140 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.126154 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.126163 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:55Z","lastTransitionTime":"2026-02-16T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.229037 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.229079 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.229088 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.229101 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.229109 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:55Z","lastTransitionTime":"2026-02-16T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.331348 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.331389 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.331397 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.331412 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.331421 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:55Z","lastTransitionTime":"2026-02-16T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.433798 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.433838 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.433852 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.433866 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.433876 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:55Z","lastTransitionTime":"2026-02-16T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.536898 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.536975 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.536992 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.537016 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.537033 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:55Z","lastTransitionTime":"2026-02-16T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.639813 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.639884 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.639896 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.639912 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.639923 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:55Z","lastTransitionTime":"2026-02-16T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.743178 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.743220 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.743231 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.743247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.743256 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:55Z","lastTransitionTime":"2026-02-16T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.845756 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.845791 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.845800 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.845813 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.845821 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:55Z","lastTransitionTime":"2026-02-16T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.878275 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.878374 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:55 crc kubenswrapper[4812]: E0216 13:32:55.878533 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:32:55 crc kubenswrapper[4812]: E0216 13:32:55.878943 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.879294 4812 scope.go:117] "RemoveContainer" containerID="c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19" Feb 16 13:32:55 crc kubenswrapper[4812]: E0216 13:32:55.879506 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.885036 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:12:46.433684633 +0000 UTC Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.948734 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.948778 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.948796 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:55 crc 
kubenswrapper[4812]: I0216 13:32:55.948816 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:55 crc kubenswrapper[4812]: I0216 13:32:55.948831 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:55Z","lastTransitionTime":"2026-02-16T13:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.051632 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.051715 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.051729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.051749 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.051790 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:56Z","lastTransitionTime":"2026-02-16T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.155295 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.155320 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.155328 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.155341 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.155350 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:56Z","lastTransitionTime":"2026-02-16T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.257579 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.257628 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.257638 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.257652 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.257663 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:56Z","lastTransitionTime":"2026-02-16T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.360090 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.360147 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.360160 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.360178 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.360190 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:56Z","lastTransitionTime":"2026-02-16T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.462574 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.462614 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.462625 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.462645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.462656 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:56Z","lastTransitionTime":"2026-02-16T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.564896 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.564933 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.564944 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.564960 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.564972 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:56Z","lastTransitionTime":"2026-02-16T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.667692 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.667741 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.667757 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.667779 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.667789 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:56Z","lastTransitionTime":"2026-02-16T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.770180 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.770222 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.770230 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.770244 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.770253 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:56Z","lastTransitionTime":"2026-02-16T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.872608 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.872652 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.872695 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.872713 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.872725 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:56Z","lastTransitionTime":"2026-02-16T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.877991 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.878008 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:56 crc kubenswrapper[4812]: E0216 13:32:56.878096 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:56 crc kubenswrapper[4812]: E0216 13:32:56.878189 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.886048 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:50:57.874607084 +0000 UTC Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.974800 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.974860 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.974872 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.974889 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:56 crc kubenswrapper[4812]: I0216 13:32:56.974901 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:56Z","lastTransitionTime":"2026-02-16T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.077638 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.077711 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.077727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.077745 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.077773 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:57Z","lastTransitionTime":"2026-02-16T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.182399 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.182435 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.182463 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.182479 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.182496 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:57Z","lastTransitionTime":"2026-02-16T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.285007 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.285043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.285052 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.285065 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.285074 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:57Z","lastTransitionTime":"2026-02-16T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.387640 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.387715 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.387729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.387752 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.387769 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:57Z","lastTransitionTime":"2026-02-16T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.490162 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.490208 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.490218 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.490233 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.490244 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:57Z","lastTransitionTime":"2026-02-16T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.592971 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.593017 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.593031 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.593048 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.593058 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:57Z","lastTransitionTime":"2026-02-16T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.695170 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.695213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.695225 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.695241 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.695252 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:57Z","lastTransitionTime":"2026-02-16T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.798165 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.798221 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.798289 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.798308 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.798319 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:57Z","lastTransitionTime":"2026-02-16T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.879005 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.879053 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:57 crc kubenswrapper[4812]: E0216 13:32:57.879137 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:32:57 crc kubenswrapper[4812]: E0216 13:32:57.879280 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.886344 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:33:21.892096697 +0000 UTC Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.901325 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.901388 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.901400 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.901413 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:57 crc kubenswrapper[4812]: I0216 13:32:57.901425 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:57Z","lastTransitionTime":"2026-02-16T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.003635 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.003660 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.003669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.003681 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.003689 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:58Z","lastTransitionTime":"2026-02-16T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.106197 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.106235 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.106245 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.106260 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.106269 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:58Z","lastTransitionTime":"2026-02-16T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.209144 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.209185 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.209205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.209219 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.209229 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:58Z","lastTransitionTime":"2026-02-16T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.311897 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.311949 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.311960 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.311983 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.311995 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:58Z","lastTransitionTime":"2026-02-16T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.414325 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.414366 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.414377 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.414395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.414407 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:58Z","lastTransitionTime":"2026-02-16T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.516779 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.516839 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.516849 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.516866 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.516876 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:58Z","lastTransitionTime":"2026-02-16T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.619142 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.619181 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.619189 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.619205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.619215 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:58Z","lastTransitionTime":"2026-02-16T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.721529 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.721568 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.721576 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.721593 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.721607 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:58Z","lastTransitionTime":"2026-02-16T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.823675 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.823718 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.823727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.823743 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.823753 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:58Z","lastTransitionTime":"2026-02-16T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.878248 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.878255 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:32:58 crc kubenswrapper[4812]: E0216 13:32:58.878401 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:32:58 crc kubenswrapper[4812]: E0216 13:32:58.878498 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.887378 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 15:15:24.671323768 +0000 UTC Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.926672 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.926710 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.926719 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.926734 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:58 crc kubenswrapper[4812]: I0216 13:32:58.926744 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:58Z","lastTransitionTime":"2026-02-16T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.029342 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.029391 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.029402 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.029417 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.029425 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:59Z","lastTransitionTime":"2026-02-16T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.132108 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.132206 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.132224 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.132249 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.132264 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:59Z","lastTransitionTime":"2026-02-16T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.185203 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.185250 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.185259 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.185274 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.185289 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:59Z","lastTransitionTime":"2026-02-16T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:59 crc kubenswrapper[4812]: E0216 13:32:59.198043 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:59Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.203325 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.203369 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.203395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.203409 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.203417 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:59Z","lastTransitionTime":"2026-02-16T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:59 crc kubenswrapper[4812]: E0216 13:32:59.218315 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:59Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.223824 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.223881 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.223891 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.223907 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.223917 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:59Z","lastTransitionTime":"2026-02-16T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:59 crc kubenswrapper[4812]: E0216 13:32:59.236391 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:59Z is after 2025-08-24T17:21:41Z"
Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.239674 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.239722 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.239731 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.239746 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.239756 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:59Z","lastTransitionTime":"2026-02-16T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:59 crc kubenswrapper[4812]: E0216 13:32:59.250680 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:59Z is after 2025-08-24T17:21:41Z"
Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.253799 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.253838 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.253848 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.253864 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.253874 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:59Z","lastTransitionTime":"2026-02-16T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:59 crc kubenswrapper[4812]: E0216 13:32:59.263302 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:59Z is after 2025-08-24T17:21:41Z" Feb 16 13:32:59 crc kubenswrapper[4812]: E0216 13:32:59.263415 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.264908 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.264946 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.264956 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.264969 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.264977 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:59Z","lastTransitionTime":"2026-02-16T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.367074 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.367115 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.367124 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.367137 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.367147 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:59Z","lastTransitionTime":"2026-02-16T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.469593 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.469628 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.469639 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.469655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.469665 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:59Z","lastTransitionTime":"2026-02-16T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.571657 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.571709 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.571725 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.571743 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.571756 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:59Z","lastTransitionTime":"2026-02-16T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.673843 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.673876 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.673885 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.673902 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.673914 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:59Z","lastTransitionTime":"2026-02-16T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.776175 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.776213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.776225 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.776239 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.776248 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:59Z","lastTransitionTime":"2026-02-16T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.851880 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs\") pod \"network-metrics-daemon-szt79\" (UID: \"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\") " pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:59 crc kubenswrapper[4812]: E0216 13:32:59.851994 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 13:32:59 crc kubenswrapper[4812]: E0216 13:32:59.852075 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs podName:d2a1f0c6-cafa-4c67-a2ad-d6003e88613c nodeName:}" failed. No retries permitted until 2026-02-16 13:33:31.852059637 +0000 UTC m=+100.916390328 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs") pod "network-metrics-daemon-szt79" (UID: "d2a1f0c6-cafa-4c67-a2ad-d6003e88613c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.878049 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.878077 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:32:59 crc kubenswrapper[4812]: E0216 13:32:59.878174 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:32:59 crc kubenswrapper[4812]: E0216 13:32:59.878329 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.878481 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.878509 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.878523 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.878538 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.878550 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:59Z","lastTransitionTime":"2026-02-16T13:32:59Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.888061 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 19:45:05.987432781 +0000 UTC Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.980143 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.980176 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.980188 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.980204 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:32:59 crc kubenswrapper[4812]: I0216 13:32:59.980216 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:32:59Z","lastTransitionTime":"2026-02-16T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.082562 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.082595 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.082623 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.082666 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.082676 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:00Z","lastTransitionTime":"2026-02-16T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.185238 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.185305 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.185316 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.185334 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.185345 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:00Z","lastTransitionTime":"2026-02-16T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.287612 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.287649 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.287659 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.287675 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.287685 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:00Z","lastTransitionTime":"2026-02-16T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.390205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.390272 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.390285 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.390305 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.390325 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:00Z","lastTransitionTime":"2026-02-16T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.493124 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.493168 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.493180 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.493198 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.493209 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:00Z","lastTransitionTime":"2026-02-16T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.595613 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.595645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.595655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.595669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.595679 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:00Z","lastTransitionTime":"2026-02-16T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.698193 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.698243 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.698256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.698273 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.698285 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:00Z","lastTransitionTime":"2026-02-16T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.808707 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.808736 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.808745 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.808757 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.808767 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:00Z","lastTransitionTime":"2026-02-16T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.878456 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.878477 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:00 crc kubenswrapper[4812]: E0216 13:33:00.878611 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:00 crc kubenswrapper[4812]: E0216 13:33:00.878720 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.888496 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 11:21:58.858981344 +0000 UTC Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.910429 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.910487 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.910498 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.910514 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:00 crc kubenswrapper[4812]: I0216 13:33:00.910524 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:00Z","lastTransitionTime":"2026-02-16T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.012251 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.012285 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.012293 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.012306 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.012314 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:01Z","lastTransitionTime":"2026-02-16T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.113771 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.113806 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.113816 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.113830 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.113838 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:01Z","lastTransitionTime":"2026-02-16T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.217016 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.217064 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.217076 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.217093 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.217104 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:01Z","lastTransitionTime":"2026-02-16T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.319510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.319549 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.319560 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.319574 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.319585 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:01Z","lastTransitionTime":"2026-02-16T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.421628 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.421701 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.421725 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.421755 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.421778 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:01Z","lastTransitionTime":"2026-02-16T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.523874 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.523922 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.523931 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.523947 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.523956 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:01Z","lastTransitionTime":"2026-02-16T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.626407 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.626457 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.626467 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.626483 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.626494 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:01Z","lastTransitionTime":"2026-02-16T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.729196 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.729244 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.729255 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.729273 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.729286 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:01Z","lastTransitionTime":"2026-02-16T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.831513 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.831593 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.831607 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.831625 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.831636 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:01Z","lastTransitionTime":"2026-02-16T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.878122 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:01 crc kubenswrapper[4812]: E0216 13:33:01.878271 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.878288 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:01 crc kubenswrapper[4812]: E0216 13:33:01.878432 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.889301 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:32:35.811733197 +0000 UTC Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.892651 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4574e2db-75d7-4da6-bdf8-84a06c617799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85905db3e100d71dfb29420eccfd9a129be4b9a6950a8e5e2915d7f8aabcc255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858f53f244902f66ee53409db591138aba707c545b1f7cc0da69a691be1e2138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3e9624fe4d351638e9b45a1d575c06a3c9e7e12a77dcd8cb6a61996fe51fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:01Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.906398 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:01Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.920734 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:01Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.935416 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.935495 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.935507 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.935569 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.935581 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:01Z","lastTransitionTime":"2026-02-16T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.939029 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:40Z\\\",\\\"message\\\":\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 13:32:40.612720 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5e
fa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:01Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.950098 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:01Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.965433 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 
13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc1604
25e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:01Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.977249 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:33:01Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:01 crc kubenswrapper[4812]: I0216 13:33:01.989810 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f85
9b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:01Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.002166 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.010030 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.021105 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.032195 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.038067 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:02 crc 
kubenswrapper[4812]: I0216 13:33:02.038112 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.038125 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.038143 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.038156 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:02Z","lastTransitionTime":"2026-02-16T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.041855 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b
9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.053032 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.064893 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.077696 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.088332 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633fca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.140330 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.140354 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.140364 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.140378 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.140390 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:02Z","lastTransitionTime":"2026-02-16T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.243176 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.243239 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.243262 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.243301 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.243320 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:02Z","lastTransitionTime":"2026-02-16T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.342554 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2hhp5_934e533e-cc26-4770-af67-3dbcaa0dab5b/kube-multus/0.log" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.342631 4812 generic.go:334] "Generic (PLEG): container finished" podID="934e533e-cc26-4770-af67-3dbcaa0dab5b" containerID="8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b" exitCode=1 Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.342675 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2hhp5" event={"ID":"934e533e-cc26-4770-af67-3dbcaa0dab5b","Type":"ContainerDied","Data":"8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b"} Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.343201 4812 scope.go:117] "RemoveContainer" containerID="8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.346754 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.346787 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.346796 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.346810 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.346822 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:02Z","lastTransitionTime":"2026-02-16T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.361765 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633fca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 
13:33:02.380117 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.395051 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.409928 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.422757 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4574e2db-75d7-4da6-bdf8-84a06c617799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85905db3e100d71dfb29420eccfd9a129be4b9a6950a8e5e2915d7f8aabcc255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858f53f244902f66ee53409db591138aba707c545b1f7cc0da69a691be1e2138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3e9624fe4d351638e9b45a1d575c06a3c9e7e12a77dcd8cb6a61996fe51fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-r
esources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.435821 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.449231 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.449261 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.449271 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.449285 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.449296 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:02Z","lastTransitionTime":"2026-02-16T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.450183 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.464326 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.476380 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.496482 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:40Z\\\",\\\"message\\\":\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 13:32:40.612720 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5e
fa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.506825 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.519704 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 
13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc1604
25e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.529610 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.540657 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f85
9b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.551862 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.552098 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.552135 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.552168 4812 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.552181 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:02Z","lastTransitionTime":"2026-02-16T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.554335 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.570387 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:33:01Z\\\",\\\"message\\\":\\\"2026-02-16T13:32:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30129742-2774-4438-ab09-c8d07b65190b\\\\n2026-02-16T13:32:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30129742-2774-4438-ab09-c8d07b65190b to /host/opt/cni/bin/\\\\n2026-02-16T13:32:16Z [verbose] multus-daemon started\\\\n2026-02-16T13:32:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T13:33:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.583681 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-16T13:33:02Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.655095 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.655461 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.655480 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.655499 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.655512 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:02Z","lastTransitionTime":"2026-02-16T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.758491 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.758544 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.758561 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.758582 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.758597 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:02Z","lastTransitionTime":"2026-02-16T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.861333 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.861391 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.861405 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.861427 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.861462 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:02Z","lastTransitionTime":"2026-02-16T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.878675 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:02 crc kubenswrapper[4812]: E0216 13:33:02.878839 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.878877 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:02 crc kubenswrapper[4812]: E0216 13:33:02.878958 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.889955 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:26:51.071231968 +0000 UTC Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.963758 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.963820 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.963831 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.963850 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:02 crc kubenswrapper[4812]: I0216 13:33:02.963862 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:02Z","lastTransitionTime":"2026-02-16T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.066464 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.066509 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.066522 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.066541 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.066555 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:03Z","lastTransitionTime":"2026-02-16T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.169593 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.169668 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.169684 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.169704 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.169725 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:03Z","lastTransitionTime":"2026-02-16T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.273588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.273634 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.273644 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.273666 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.273678 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:03Z","lastTransitionTime":"2026-02-16T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.350483 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2hhp5_934e533e-cc26-4770-af67-3dbcaa0dab5b/kube-multus/0.log" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.350552 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2hhp5" event={"ID":"934e533e-cc26-4770-af67-3dbcaa0dab5b","Type":"ContainerStarted","Data":"63cd21781b359c2f631527e3e4e2649264ee4e64c5d3cc9842fb9106017565c9"} Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.364813 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.376227 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.376470 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.376595 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.376684 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.376772 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:03Z","lastTransitionTime":"2026-02-16T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.378843 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cd21781b359c2f631527e3e4e2649264ee4e64c5d3cc9842fb9106017565c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:33:01Z\\\",\\\"message\\\":\\\"2026-02-16T13:32:16+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30129742-2774-4438-ab09-c8d07b65190b\\\\n2026-02-16T13:32:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30129742-2774-4438-ab09-c8d07b65190b to /host/opt/cni/bin/\\\\n2026-02-16T13:32:16Z [verbose] multus-daemon started\\\\n2026-02-16T13:32:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T13:33:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.393145 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b
9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.406114 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.419872 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.433929 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.445860 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633fca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.457611 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4574e2db-75d7-4da6-bdf8-84a06c617799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85905db3e100d71dfb29420eccfd9a129be4b9a6950a8e5e2915d7f8aabcc255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858f53f244902f66ee53409db591138aba707c545b1f7cc0da69a691be1e2138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3e9624fe4d351638e9b45a1d575c06a3c9e7e12a77dcd8cb6a61996fe51fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.470318 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.479088 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.479223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.479289 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.479348 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.479401 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:03Z","lastTransitionTime":"2026-02-16T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.480378 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.491497 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc 
kubenswrapper[4812]: I0216 13:33:03.506427 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d
64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.520738 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.533004 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f85
9b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.546966 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.556589 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.574121 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:40Z\\\",\\\"message\\\":\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 13:32:40.612720 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5e
fa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:03Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.581689 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.581745 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.581756 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.581776 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.581792 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:03Z","lastTransitionTime":"2026-02-16T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.684785 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.685053 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.685121 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.685193 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.685265 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:03Z","lastTransitionTime":"2026-02-16T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.787643 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.787682 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.787693 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.787708 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.787719 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:03Z","lastTransitionTime":"2026-02-16T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.879013 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.879053 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:03 crc kubenswrapper[4812]: E0216 13:33:03.879157 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:03 crc kubenswrapper[4812]: E0216 13:33:03.879249 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.890074 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:36:44.882355465 +0000 UTC Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.890648 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.890693 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.890704 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.890722 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.890734 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:03Z","lastTransitionTime":"2026-02-16T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.992939 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.992986 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.992996 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.993013 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:03 crc kubenswrapper[4812]: I0216 13:33:03.993025 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:03Z","lastTransitionTime":"2026-02-16T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.096231 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.096305 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.096318 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.096335 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.096346 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:04Z","lastTransitionTime":"2026-02-16T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.199018 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.199053 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.199062 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.199079 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.199092 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:04Z","lastTransitionTime":"2026-02-16T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.302318 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.302396 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.302410 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.302427 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.302452 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:04Z","lastTransitionTime":"2026-02-16T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.404709 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.404749 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.404760 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.404775 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.404785 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:04Z","lastTransitionTime":"2026-02-16T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.507138 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.507195 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.507207 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.507225 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.507236 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:04Z","lastTransitionTime":"2026-02-16T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.609839 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.609881 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.609891 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.609910 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.609923 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:04Z","lastTransitionTime":"2026-02-16T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.712604 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.712651 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.712661 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.712686 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.712695 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:04Z","lastTransitionTime":"2026-02-16T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.815395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.815456 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.815467 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.815484 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.815496 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:04Z","lastTransitionTime":"2026-02-16T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.878434 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.878585 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:04 crc kubenswrapper[4812]: E0216 13:33:04.878649 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:04 crc kubenswrapper[4812]: E0216 13:33:04.878784 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.890305 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 04:28:05.997021574 +0000 UTC Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.894056 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.918374 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.918419 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.918431 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.918469 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:04 crc kubenswrapper[4812]: I0216 13:33:04.918481 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:04Z","lastTransitionTime":"2026-02-16T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.021166 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.021195 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.021203 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.021216 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.021226 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:05Z","lastTransitionTime":"2026-02-16T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.123540 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.123616 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.123635 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.123650 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.123659 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:05Z","lastTransitionTime":"2026-02-16T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.225261 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.225321 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.225334 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.225354 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.225368 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:05Z","lastTransitionTime":"2026-02-16T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.328599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.328675 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.328694 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.328722 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.328739 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:05Z","lastTransitionTime":"2026-02-16T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.431783 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.431823 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.431832 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.431845 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.431855 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:05Z","lastTransitionTime":"2026-02-16T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.534714 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.534771 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.534783 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.534801 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.534815 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:05Z","lastTransitionTime":"2026-02-16T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.638127 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.638185 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.638198 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.638217 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.638229 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:05Z","lastTransitionTime":"2026-02-16T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.740800 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.740875 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.740894 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.740916 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.740932 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:05Z","lastTransitionTime":"2026-02-16T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.843548 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.843611 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.843626 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.843647 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.843662 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:05Z","lastTransitionTime":"2026-02-16T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.878232 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.878269 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:05 crc kubenswrapper[4812]: E0216 13:33:05.878455 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:05 crc kubenswrapper[4812]: E0216 13:33:05.878575 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.891133 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 01:00:12.763379566 +0000 UTC Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.946326 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.946394 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.946404 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.946425 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:05 crc kubenswrapper[4812]: I0216 13:33:05.946438 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:05Z","lastTransitionTime":"2026-02-16T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.048934 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.048981 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.048990 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.049005 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.049016 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:06Z","lastTransitionTime":"2026-02-16T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.152598 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.152644 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.152656 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.152681 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.152697 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:06Z","lastTransitionTime":"2026-02-16T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.255017 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.255073 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.255086 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.255102 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.255113 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:06Z","lastTransitionTime":"2026-02-16T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.357566 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.357865 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.357935 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.358003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.358103 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:06Z","lastTransitionTime":"2026-02-16T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.460807 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.460871 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.460889 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.460909 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.460946 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:06Z","lastTransitionTime":"2026-02-16T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.564804 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.564879 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.564889 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.564907 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.564919 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:06Z","lastTransitionTime":"2026-02-16T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.667028 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.667070 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.667081 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.667095 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.667105 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:06Z","lastTransitionTime":"2026-02-16T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.770331 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.770422 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.770439 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.770487 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.770599 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:06Z","lastTransitionTime":"2026-02-16T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.872591 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.872712 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.872732 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.872767 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.872787 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:06Z","lastTransitionTime":"2026-02-16T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.878789 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.878901 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:06 crc kubenswrapper[4812]: E0216 13:33:06.878992 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:06 crc kubenswrapper[4812]: E0216 13:33:06.879134 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.891929 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:34:49.550551785 +0000 UTC Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.975671 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.975728 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.975740 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.975756 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:06 crc kubenswrapper[4812]: I0216 13:33:06.975767 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:06Z","lastTransitionTime":"2026-02-16T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.077621 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.077669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.077682 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.077697 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.077709 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:07Z","lastTransitionTime":"2026-02-16T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.180418 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.180499 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.180516 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.180538 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.180555 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:07Z","lastTransitionTime":"2026-02-16T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.283842 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.283881 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.283892 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.283919 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.283932 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:07Z","lastTransitionTime":"2026-02-16T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.387052 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.387094 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.387105 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.387119 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.387129 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:07Z","lastTransitionTime":"2026-02-16T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.489202 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.489238 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.489247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.489260 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.489269 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:07Z","lastTransitionTime":"2026-02-16T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.592011 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.592246 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.592257 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.592274 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.592286 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:07Z","lastTransitionTime":"2026-02-16T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.694429 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.694519 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.694536 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.694561 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.694577 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:07Z","lastTransitionTime":"2026-02-16T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.797092 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.797140 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.797158 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.797176 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.797188 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:07Z","lastTransitionTime":"2026-02-16T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.878539 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.878659 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:07 crc kubenswrapper[4812]: E0216 13:33:07.878739 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:07 crc kubenswrapper[4812]: E0216 13:33:07.878918 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.892804 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:10:50.681537579 +0000 UTC Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.900268 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.900335 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.900359 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.900388 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:07 crc kubenswrapper[4812]: I0216 13:33:07.900411 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:07Z","lastTransitionTime":"2026-02-16T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.004729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.004810 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.004827 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.004852 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.004873 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:08Z","lastTransitionTime":"2026-02-16T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.108296 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.108360 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.108377 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.108401 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.108417 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:08Z","lastTransitionTime":"2026-02-16T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.212768 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.212812 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.212823 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.212838 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.212848 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:08Z","lastTransitionTime":"2026-02-16T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.315428 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.315539 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.315565 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.315597 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.315619 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:08Z","lastTransitionTime":"2026-02-16T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.418148 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.418184 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.418193 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.418206 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.418215 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:08Z","lastTransitionTime":"2026-02-16T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.520971 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.521061 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.521073 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.521099 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.521113 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:08Z","lastTransitionTime":"2026-02-16T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.623768 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.623862 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.623892 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.623922 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.623947 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:08Z","lastTransitionTime":"2026-02-16T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.725542 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.725603 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.725621 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.725643 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.725660 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:08Z","lastTransitionTime":"2026-02-16T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.828229 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.828266 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.828275 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.828288 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.828298 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:08Z","lastTransitionTime":"2026-02-16T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.878144 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.878194 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:08 crc kubenswrapper[4812]: E0216 13:33:08.878303 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:08 crc kubenswrapper[4812]: E0216 13:33:08.878481 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.893592 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:04:58.770286554 +0000 UTC Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.929807 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.929896 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.929916 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.929938 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:08 crc kubenswrapper[4812]: I0216 13:33:08.929955 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:08Z","lastTransitionTime":"2026-02-16T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.033267 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.033323 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.033334 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.033350 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.033362 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:09Z","lastTransitionTime":"2026-02-16T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.135781 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.135840 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.135859 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.135884 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.135902 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:09Z","lastTransitionTime":"2026-02-16T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.237881 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.238189 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.238200 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.238214 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.238225 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:09Z","lastTransitionTime":"2026-02-16T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.341069 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.341108 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.341116 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.341128 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.341138 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:09Z","lastTransitionTime":"2026-02-16T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.443275 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.443345 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.443359 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.443383 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.443401 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:09Z","lastTransitionTime":"2026-02-16T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.527049 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.527110 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.527121 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.527142 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.527155 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:09Z","lastTransitionTime":"2026-02-16T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:09 crc kubenswrapper[4812]: E0216 13:33:09.541399 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:09Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.545940 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.545990 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.546020 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.546043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.546055 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:09Z","lastTransitionTime":"2026-02-16T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:09 crc kubenswrapper[4812]: E0216 13:33:09.564612 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:09Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.569181 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.569226 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.569238 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.569256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.569266 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:09Z","lastTransitionTime":"2026-02-16T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:09 crc kubenswrapper[4812]: E0216 13:33:09.584007 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:09Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.589627 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.589709 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.589721 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.589740 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.589752 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:09Z","lastTransitionTime":"2026-02-16T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:09 crc kubenswrapper[4812]: E0216 13:33:09.602920 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:09Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.606268 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.606297 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.606307 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.606322 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.606355 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:09Z","lastTransitionTime":"2026-02-16T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:09 crc kubenswrapper[4812]: E0216 13:33:09.619907 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:09Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:09 crc kubenswrapper[4812]: E0216 13:33:09.620028 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.621694 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.621714 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.621721 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.621734 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.621743 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:09Z","lastTransitionTime":"2026-02-16T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.724041 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.724108 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.724118 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.724133 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.724144 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:09Z","lastTransitionTime":"2026-02-16T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.826893 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.826949 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.826958 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.826991 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.827002 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:09Z","lastTransitionTime":"2026-02-16T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.878301 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.878361 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:09 crc kubenswrapper[4812]: E0216 13:33:09.878484 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:09 crc kubenswrapper[4812]: E0216 13:33:09.878510 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.894378 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:11:30.528776606 +0000 UTC Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.929259 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.929311 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.929322 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.929337 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:09 crc kubenswrapper[4812]: I0216 13:33:09.929347 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:09Z","lastTransitionTime":"2026-02-16T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.032229 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.032284 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.032299 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.032315 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.032326 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:10Z","lastTransitionTime":"2026-02-16T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.139078 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.139130 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.139144 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.139161 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.139174 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:10Z","lastTransitionTime":"2026-02-16T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.242215 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.242271 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.242288 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.242311 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.242328 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:10Z","lastTransitionTime":"2026-02-16T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.344249 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.344288 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.344304 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.344324 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.344339 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:10Z","lastTransitionTime":"2026-02-16T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.447575 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.447617 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.447629 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.447645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.447657 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:10Z","lastTransitionTime":"2026-02-16T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.549855 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.549897 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.549908 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.549923 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.549935 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:10Z","lastTransitionTime":"2026-02-16T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.652624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.652666 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.652676 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.652693 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.652705 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:10Z","lastTransitionTime":"2026-02-16T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.755889 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.755932 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.755944 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.755960 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.755972 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:10Z","lastTransitionTime":"2026-02-16T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.859234 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.859295 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.859313 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.859339 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.859357 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:10Z","lastTransitionTime":"2026-02-16T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.878740 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.878845 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:10 crc kubenswrapper[4812]: E0216 13:33:10.879536 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:10 crc kubenswrapper[4812]: E0216 13:33:10.879687 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.880116 4812 scope.go:117] "RemoveContainer" containerID="c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.894821 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 07:23:28.549734905 +0000 UTC Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.961591 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.961624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.961634 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.961649 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:10 crc kubenswrapper[4812]: I0216 13:33:10.961658 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:10Z","lastTransitionTime":"2026-02-16T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.063843 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.063894 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.063907 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.063924 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.063935 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:11Z","lastTransitionTime":"2026-02-16T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.171375 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.171504 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.171545 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.171583 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.171627 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:11Z","lastTransitionTime":"2026-02-16T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.274186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.274220 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.274242 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.274258 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.274270 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:11Z","lastTransitionTime":"2026-02-16T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.375997 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovnkube-controller/2.log" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.376123 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.376167 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.376178 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.376194 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.376205 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:11Z","lastTransitionTime":"2026-02-16T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.378197 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerStarted","Data":"b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d"} Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.379035 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.390521 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.404019 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cd21781b359c2f631527e3e4e2649264ee4e64c5d3cc9842fb9106017565c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:33:01Z\\\",\\\"message\\\":\\\"2026-02-16T13:32:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30129742-2774-4438-ab09-c8d07b65190b\\\\n2026-02-16T13:32:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30129742-2774-4438-ab09-c8d07b65190b to /host/opt/cni/bin/\\\\n2026-02-16T13:32:16Z [verbose] multus-daemon started\\\\n2026-02-16T13:32:16Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T13:33:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.415750 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b
9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.428056 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.437516 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed25531-078f-4432-b260-2dc45d63eed7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7c33b5d95fa2865d325956c87e1024adf7bf0a40ef2e590b467f9cee892138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e12743ac6a6f99e8efd74b1fcafc157b16793659394ef957ca99f5886fd478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e12743ac6a6f99e8efd74b1fcafc157b16793659394ef957ca99f5886fd478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.450339 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.464962 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.478867 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.478928 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.478941 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.478967 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.478984 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:11Z","lastTransitionTime":"2026-02-16T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.480235 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633fca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.495179 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4574e2db-75d7-4da6-bdf8-84a06c617799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85905db3e100d71dfb29420eccfd9a129be4b9a6950a8e5e2915d7f8aabcc255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858f53f244902f66ee53409db591138aba707c545b1f7cc0da69a691be1e2138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3e9624fe4d351638e9b45a1d575c06a3c9e7e12a77dcd8cb6a61996fe51fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.506562 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.516119 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.530990 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.542909 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.553434 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f85
9b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.565608 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.576545 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.581312 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.581349 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.581358 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.581372 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.581381 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:11Z","lastTransitionTime":"2026-02-16T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.593722 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:40Z\\\",\\\"message\\\":\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 13:32:40.612720 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.602821 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc 
kubenswrapper[4812]: I0216 13:33:11.684059 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.684098 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.684109 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.684124 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.684134 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:11Z","lastTransitionTime":"2026-02-16T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.786047 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.786097 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.786111 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.786142 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.786152 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:11Z","lastTransitionTime":"2026-02-16T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.878997 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.879035 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:11 crc kubenswrapper[4812]: E0216 13:33:11.879233 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:11 crc kubenswrapper[4812]: E0216 13:33:11.879542 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.888924 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.888971 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.888980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.888993 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.889020 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:11Z","lastTransitionTime":"2026-02-16T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.895269 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:43:09.049223987 +0000 UTC Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.899723 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633fca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.916431 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-
o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.933231 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed25531-078f-4432-b260-2dc45d63eed7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7c33b5d95fa2865d325956c87e1024adf7bf0a40ef2e590b467f9cee892138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e12743ac6a6f99e8efd74b1fcafc157b16793659394ef957ca99f5886fd478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e12743ac6a6f99e8efd74b1fcafc157b16793659394ef957ca99f5886fd478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.953814 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.973631 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.991813 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4574e2db-75d7-4da6-bdf8-84a06c617799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85905db3e100d71dfb29420eccfd9a129be4b9a6950a8e5e2915d7f8aabcc255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858f53f244902f66ee53409db591138aba707c545b1f7cc0da69a691be1e2138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3e9624fe4d351638e9b45a1d575c06a3c9e7e12a77dcd8cb6a61996fe51fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-r
esources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:11Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.992077 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.992111 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:11 crc 
kubenswrapper[4812]: I0216 13:33:11.992125 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.992145 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:11 crc kubenswrapper[4812]: I0216 13:33:11.992164 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:11Z","lastTransitionTime":"2026-02-16T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.011555 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.022133 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.033169 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.041888 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.060175 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:40Z\\\",\\\"message\\\":\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 13:32:40.612720 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.070082 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc 
kubenswrapper[4812]: I0216 13:33:12.082791 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d
64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.092681 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.094394 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.094467 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.094479 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.094497 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.094508 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:12Z","lastTransitionTime":"2026-02-16T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.104704 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.117285 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.129715 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cd21781b359c2f631527e3e4e2649264ee4e64c5d3cc9842fb9106017565c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:33:01Z\\\",\\\"message\\\":\\\"2026-02-16T13:32:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30129742-2774-4438-ab09-c8d07b65190b\\\\n2026-02-16T13:32:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30129742-2774-4438-ab09-c8d07b65190b to /host/opt/cni/bin/\\\\n2026-02-16T13:32:16Z [verbose] multus-daemon started\\\\n2026-02-16T13:32:16Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T13:33:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.139970 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b
9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.197017 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.197047 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.197055 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:12 crc 
kubenswrapper[4812]: I0216 13:33:12.197068 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.197077 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:12Z","lastTransitionTime":"2026-02-16T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.299753 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.299795 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.299806 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.299822 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.299831 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:12Z","lastTransitionTime":"2026-02-16T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.383208 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovnkube-controller/3.log" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.383883 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovnkube-controller/2.log" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.386462 4812 generic.go:334] "Generic (PLEG): container finished" podID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerID="b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d" exitCode=1 Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.386494 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerDied","Data":"b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d"} Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.386527 4812 scope.go:117] "RemoveContainer" containerID="c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.387291 4812 scope.go:117] "RemoveContainer" containerID="b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d" Feb 16 13:33:12 crc kubenswrapper[4812]: E0216 13:33:12.387529 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.400399 4812 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.401804 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.401868 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.401881 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.401900 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.401910 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:12Z","lastTransitionTime":"2026-02-16T13:33:12Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.410519 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed25531-078f-4432-b260-2dc45d63eed7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7c33b5d95fa2865d325956c87e1024adf7bf0a40ef2e590b467f9cee892138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},
{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e12743ac6a6f99e8efd74b1fcafc157b16793659394ef957ca99f5886fd478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e12743ac6a6f99e8efd74b1fcafc157b16793659394ef957ca99f5886fd478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.421404 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.434689 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.445310 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633fca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.455739 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4574e2db-75d7-4da6-bdf8-84a06c617799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85905db3e100d71dfb29420eccfd9a129be4b9a6950a8e5e2915d7f8aabcc255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858f53f244902f66ee53409db591138aba707c545b1f7cc0da69a691be1e2138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3e9624fe4d351638e9b45a1d575c06a3c9e7e12a77dcd8cb6a61996fe51fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.469506 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.479684 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.492348 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.504617 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.504654 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.504664 4812 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.504597 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.504680 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.504692 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:12Z","lastTransitionTime":"2026-02-16T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.518620 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.530814 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.541809 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.562020 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c46752e9b7b26727ab1a278c44c87ac6fc14d7f5fda7f3332d47815dbae3ec19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:32:40Z\\\",\\\"message\\\":\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-diagnostics/network-check-target\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.219\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 13:32:40.612720 6488 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:32:40Z \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:33:11Z\\\",\\\"message\\\":\\\"6 6888 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-5w4kf\\\\nI0216 13:33:11.698505 6888 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 13:33:11.698512 6888 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0216 13:33:11.698505 6888 
obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 13:33:11.698516 6888 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 13:33:11.698523 6888 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 13:33:11.698525 6888 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 13:33:11.698536 6888 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 13:33:11.698537 6888 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0216 13:33:11.698543 6888 ovn.go:134]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\
\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\
\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 
13:33:12.574035 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc 
kubenswrapper[4812]: I0216 13:33:12.601633 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.608034 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.608066 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.608074 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.608089 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.608099 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:12Z","lastTransitionTime":"2026-02-16T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.638467 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cd21781b359c2f631527e3e4e2649264ee4e64c5d3cc9842fb9106017565c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:33:01Z\\\",\\\"message\\\":\\\"2026-02-16T13:32:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ 
to /host/opt/cni/bin/upgrade_30129742-2774-4438-ab09-c8d07b65190b\\\\n2026-02-16T13:32:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30129742-2774-4438-ab09-c8d07b65190b to /host/opt/cni/bin/\\\\n2026-02-16T13:32:16Z [verbose] multus-daemon started\\\\n2026-02-16T13:32:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T13:33:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.650457 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b
9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:12Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.709939 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.709987 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.710003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:12 crc 
kubenswrapper[4812]: I0216 13:33:12.710024 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.710042 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:12Z","lastTransitionTime":"2026-02-16T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.812474 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.812529 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.812545 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.812567 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.812584 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:12Z","lastTransitionTime":"2026-02-16T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.878936 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.879017 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:12 crc kubenswrapper[4812]: E0216 13:33:12.879109 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:12 crc kubenswrapper[4812]: E0216 13:33:12.879219 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.896296 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:05:27.968118402 +0000 UTC Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.915259 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.915307 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.915324 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.915348 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:12 crc kubenswrapper[4812]: I0216 13:33:12.915366 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:12Z","lastTransitionTime":"2026-02-16T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.019003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.019083 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.019107 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.019134 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.019154 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:13Z","lastTransitionTime":"2026-02-16T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.121917 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.121974 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.121989 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.122014 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.122031 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:13Z","lastTransitionTime":"2026-02-16T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.224888 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.224926 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.224938 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.224954 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.224968 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:13Z","lastTransitionTime":"2026-02-16T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.327953 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.328002 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.328013 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.328032 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.328046 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:13Z","lastTransitionTime":"2026-02-16T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.391794 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovnkube-controller/3.log" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.395089 4812 scope.go:117] "RemoveContainer" containerID="b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d" Feb 16 13:33:13 crc kubenswrapper[4812]: E0216 13:33:13.395239 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.408213 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2
bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.419883 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.430256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.430315 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.430327 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.430346 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.430362 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:13Z","lastTransitionTime":"2026-02-16T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.433902 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f859b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.445980 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.456331 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.474990 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:33:11Z\\\",\\\"message\\\":\\\"6 6888 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-5w4kf\\\\nI0216 13:33:11.698505 6888 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 13:33:11.698512 6888 ovn.go:134] Ensuring zone local for Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0216 13:33:11.698505 6888 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 13:33:11.698516 6888 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 13:33:11.698523 6888 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 13:33:11.698525 6888 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 13:33:11.698536 6888 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 13:33:11.698537 6888 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0216 13:33:11.698543 6888 ovn.go:134]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:33:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5e
fa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.485409 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.497956 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.514483 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cd21781b359c2f631527e3e4e2649264ee4e64c5d3cc9842fb9106017565c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:33:01Z\\\",\\\"message\\\":\\\"2026-02-16T13:32:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30129742-2774-4438-ab09-c8d07b65190b\\\\n2026-02-16T13:32:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_30129742-2774-4438-ab09-c8d07b65190b to /host/opt/cni/bin/\\\\n2026-02-16T13:32:16Z [verbose] multus-daemon started\\\\n2026-02-16T13:32:16Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T13:33:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.525636 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b
9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.533175 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.533207 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.533216 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:13 crc 
kubenswrapper[4812]: I0216 13:33:13.533230 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.533239 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:13Z","lastTransitionTime":"2026-02-16T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.537771 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.547666 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed25531-078f-4432-b260-2dc45d63eed7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7c33b5d95fa2865d325956c87e1024adf7bf0a40ef2e590b467f9cee892138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e12743ac6a6f99e8efd74b1fcafc157b16793659394ef957ca99f5886fd478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e12743ac6a6f99e8efd74b1fcafc157b16793659394ef957ca99f5886fd478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.559804 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.583655 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.594287 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633fca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.605695 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4574e2db-75d7-4da6-bdf8-84a06c617799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85905db3e100d71dfb29420eccfd9a129be4b9a6950a8e5e2915d7f8aabcc255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858f53f244902f66ee53409db591138aba707c545b1f7cc0da69a691be1e2138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3e9624fe4d351638e9b45a1d575c06a3c9e7e12a77dcd8cb6a61996fe51fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.616522 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.625720 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:13Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.635164 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.635197 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.635212 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.635227 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.635237 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:13Z","lastTransitionTime":"2026-02-16T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.737005 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.737047 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.737056 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.737074 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.737083 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:13Z","lastTransitionTime":"2026-02-16T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.840508 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.840604 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.840618 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.840640 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.840653 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:13Z","lastTransitionTime":"2026-02-16T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.878283 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.878376 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:13 crc kubenswrapper[4812]: E0216 13:33:13.878546 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:13 crc kubenswrapper[4812]: E0216 13:33:13.878657 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.897119 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 09:24:16.427979893 +0000 UTC Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.944184 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.944246 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.944258 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.944278 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:13 crc kubenswrapper[4812]: I0216 13:33:13.944294 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:13Z","lastTransitionTime":"2026-02-16T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.047375 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.047422 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.047436 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.047480 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.047495 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:14Z","lastTransitionTime":"2026-02-16T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.149858 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.149889 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.149899 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.149913 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.149923 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:14Z","lastTransitionTime":"2026-02-16T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.253094 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.253159 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.253176 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.253198 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.253215 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:14Z","lastTransitionTime":"2026-02-16T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.356143 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.356203 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.356214 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.356237 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.356253 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:14Z","lastTransitionTime":"2026-02-16T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.458673 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.458781 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.458800 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.458831 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.458848 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:14Z","lastTransitionTime":"2026-02-16T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.561990 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.562066 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.562082 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.562109 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.562124 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:14Z","lastTransitionTime":"2026-02-16T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.665850 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.665920 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.665941 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.665965 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.665986 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:14Z","lastTransitionTime":"2026-02-16T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.768820 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.768894 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.768909 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.768932 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.768950 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:14Z","lastTransitionTime":"2026-02-16T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.871980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.872069 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.872085 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.872111 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.872128 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:14Z","lastTransitionTime":"2026-02-16T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.878300 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.878300 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:14 crc kubenswrapper[4812]: E0216 13:33:14.878563 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:14 crc kubenswrapper[4812]: E0216 13:33:14.878648 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.897768 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:58:36.033664471 +0000 UTC Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.980731 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.981097 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.981112 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.981134 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:14 crc kubenswrapper[4812]: I0216 13:33:14.981152 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:14Z","lastTransitionTime":"2026-02-16T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.084098 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.084137 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.084146 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.084162 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.084174 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:15Z","lastTransitionTime":"2026-02-16T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.187209 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.187256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.187264 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.187278 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.187287 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:15Z","lastTransitionTime":"2026-02-16T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.289792 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.289836 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.289847 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.289863 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.289874 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:15Z","lastTransitionTime":"2026-02-16T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.392994 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.393056 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.393067 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.393098 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.393111 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:15Z","lastTransitionTime":"2026-02-16T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.495372 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.495422 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.495436 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.495480 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.495492 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:15Z","lastTransitionTime":"2026-02-16T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.598288 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.598354 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.598372 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.598396 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.598416 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:15Z","lastTransitionTime":"2026-02-16T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.701832 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.701877 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.701887 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.701903 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.701916 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:15Z","lastTransitionTime":"2026-02-16T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.804504 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.804555 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.804570 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.804586 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.804596 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:15Z","lastTransitionTime":"2026-02-16T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.878719 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.878778 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:15 crc kubenswrapper[4812]: E0216 13:33:15.878879 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:15 crc kubenswrapper[4812]: E0216 13:33:15.878973 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.898222 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:51:13.626792598 +0000 UTC Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.907220 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.907277 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.907300 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.907347 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:15 crc kubenswrapper[4812]: I0216 13:33:15.907373 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:15Z","lastTransitionTime":"2026-02-16T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.011180 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.011248 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.011270 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.011300 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.011325 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:16Z","lastTransitionTime":"2026-02-16T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.114065 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.114151 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.114171 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.114193 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.114212 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:16Z","lastTransitionTime":"2026-02-16T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.217070 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.217105 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.217114 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.217129 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.217137 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:16Z","lastTransitionTime":"2026-02-16T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.319175 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.319244 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.319269 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.319353 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.319379 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:16Z","lastTransitionTime":"2026-02-16T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.422036 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.422105 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.422127 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.422158 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.422182 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:16Z","lastTransitionTime":"2026-02-16T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.525768 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.525807 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.525815 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.525830 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.525840 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:16Z","lastTransitionTime":"2026-02-16T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.624407 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:33:16 crc kubenswrapper[4812]: E0216 13:33:16.624765 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 13:34:20.624723413 +0000 UTC m=+149.689054174 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.628970 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.629017 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.629030 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.629047 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.629058 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:16Z","lastTransitionTime":"2026-02-16T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.725980 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.726053 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.726114 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:16 crc kubenswrapper[4812]: E0216 13:33:16.726169 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 13:33:16 crc kubenswrapper[4812]: E0216 13:33:16.726236 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 13:33:16 crc kubenswrapper[4812]: E0216 13:33:16.726262 4812 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.726239269 +0000 UTC m=+149.790570050 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 13:33:16 crc kubenswrapper[4812]: E0216 13:33:16.726278 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.72626904 +0000 UTC m=+149.790599741 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.726190 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:16 crc kubenswrapper[4812]: E0216 13:33:16.726347 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 13:33:16 
crc kubenswrapper[4812]: E0216 13:33:16.726375 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 13:33:16 crc kubenswrapper[4812]: E0216 13:33:16.726393 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:33:16 crc kubenswrapper[4812]: E0216 13:33:16.726403 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 13:33:16 crc kubenswrapper[4812]: E0216 13:33:16.726478 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 13:33:16 crc kubenswrapper[4812]: E0216 13:33:16.726498 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:33:16 crc kubenswrapper[4812]: E0216 13:33:16.726508 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.726438425 +0000 UTC m=+149.790769156 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:33:16 crc kubenswrapper[4812]: E0216 13:33:16.726572 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.726547118 +0000 UTC m=+149.790877859 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.731681 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.731721 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.731764 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.731779 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.731789 4812 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:16Z","lastTransitionTime":"2026-02-16T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.834588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.834653 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.834682 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.834708 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.834727 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:16Z","lastTransitionTime":"2026-02-16T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.878221 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:16 crc kubenswrapper[4812]: E0216 13:33:16.878392 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.878734 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:16 crc kubenswrapper[4812]: E0216 13:33:16.879007 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.898437 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 00:59:51.762919305 +0000 UTC Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.937433 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.937533 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.937552 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.937575 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:16 crc kubenswrapper[4812]: I0216 13:33:16.937594 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:16Z","lastTransitionTime":"2026-02-16T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.040226 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.040278 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.040290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.040324 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.040336 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:17Z","lastTransitionTime":"2026-02-16T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.142581 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.142648 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.142662 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.142677 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.142692 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:17Z","lastTransitionTime":"2026-02-16T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.244749 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.244806 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.244818 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.244835 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.244848 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:17Z","lastTransitionTime":"2026-02-16T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.348119 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.348165 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.348176 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.348191 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.348201 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:17Z","lastTransitionTime":"2026-02-16T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.450533 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.450585 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.450596 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.450612 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.450624 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:17Z","lastTransitionTime":"2026-02-16T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.554101 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.554143 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.554158 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.554182 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.554193 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:17Z","lastTransitionTime":"2026-02-16T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.658248 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.658296 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.658306 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.658324 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.658338 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:17Z","lastTransitionTime":"2026-02-16T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.761190 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.761224 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.761232 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.761247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.761256 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:17Z","lastTransitionTime":"2026-02-16T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.864352 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.864400 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.864410 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.864425 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.864435 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:17Z","lastTransitionTime":"2026-02-16T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.878928 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.878995 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:17 crc kubenswrapper[4812]: E0216 13:33:17.879115 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:17 crc kubenswrapper[4812]: E0216 13:33:17.879244 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.898846 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 19:25:39.327912413 +0000 UTC Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.966542 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.966596 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.966607 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.966623 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:17 crc kubenswrapper[4812]: I0216 13:33:17.966634 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:17Z","lastTransitionTime":"2026-02-16T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.069926 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.069966 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.069979 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.069998 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.070049 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:18Z","lastTransitionTime":"2026-02-16T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.172684 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.172728 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.172739 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.172754 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.172766 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:18Z","lastTransitionTime":"2026-02-16T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.275621 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.275669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.275679 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.275692 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.275702 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:18Z","lastTransitionTime":"2026-02-16T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.378110 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.378173 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.378186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.378205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.378216 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:18Z","lastTransitionTime":"2026-02-16T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.480304 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.480343 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.480354 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.480368 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.480378 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:18Z","lastTransitionTime":"2026-02-16T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.582812 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.582941 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.582953 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.582967 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.582979 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:18Z","lastTransitionTime":"2026-02-16T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.686208 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.686263 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.686282 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.686305 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.686322 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:18Z","lastTransitionTime":"2026-02-16T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.788661 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.788714 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.788728 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.788749 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.788759 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:18Z","lastTransitionTime":"2026-02-16T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.878047 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:18 crc kubenswrapper[4812]: E0216 13:33:18.878345 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.878047 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:18 crc kubenswrapper[4812]: E0216 13:33:18.878866 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.891638 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.891689 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.891702 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.891720 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.891734 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:18Z","lastTransitionTime":"2026-02-16T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.899557 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 17:19:36.828674316 +0000 UTC Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.993693 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.993743 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.993754 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.993771 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:18 crc kubenswrapper[4812]: I0216 13:33:18.993786 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:18Z","lastTransitionTime":"2026-02-16T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.096990 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.097043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.097054 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.097072 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.097088 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:19Z","lastTransitionTime":"2026-02-16T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.199496 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.199552 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.199569 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.199592 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.199609 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:19Z","lastTransitionTime":"2026-02-16T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.301637 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.301681 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.301690 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.301703 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.301712 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:19Z","lastTransitionTime":"2026-02-16T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.403927 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.404020 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.404041 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.404067 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.404089 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:19Z","lastTransitionTime":"2026-02-16T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.506751 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.506792 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.506803 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.506819 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.506830 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:19Z","lastTransitionTime":"2026-02-16T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.609009 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.609044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.609052 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.609065 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.609075 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:19Z","lastTransitionTime":"2026-02-16T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.711577 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.711641 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.711657 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.711673 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.711684 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:19Z","lastTransitionTime":"2026-02-16T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.764458 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.764515 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.764529 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.764546 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.764557 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:19Z","lastTransitionTime":"2026-02-16T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:19 crc kubenswrapper[4812]: E0216 13:33:19.780687 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.784470 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.784501 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.784510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.784524 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.784534 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:19Z","lastTransitionTime":"2026-02-16T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.800189 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.800234 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.800245 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.800262 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.800273 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:19Z","lastTransitionTime":"2026-02-16T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.816904 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.816942 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.816954 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.816969 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.816980 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:19Z","lastTransitionTime":"2026-02-16T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:19 crc kubenswrapper[4812]: E0216 13:33:19.830816 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.834376 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.834415 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.834426 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.834464 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.834475 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:19Z","lastTransitionTime":"2026-02-16T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:19 crc kubenswrapper[4812]: E0216 13:33:19.846745 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4981c762-995f-430f-ab9d-bca26618d78a\\\",\\\"systemUUID\\\":\\\"a8093dd5-8447-4cfc-ac6f-47d191141ed0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:19Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:19 crc kubenswrapper[4812]: E0216 13:33:19.846908 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.848644 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.848714 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.848732 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.848752 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.848769 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:19Z","lastTransitionTime":"2026-02-16T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.878813 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:19 crc kubenswrapper[4812]: E0216 13:33:19.878936 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.879143 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:19 crc kubenswrapper[4812]: E0216 13:33:19.879329 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.899998 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:27:40.775409076 +0000 UTC Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.951017 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.951059 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.951069 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.951084 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:19 crc kubenswrapper[4812]: I0216 13:33:19.951094 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:19Z","lastTransitionTime":"2026-02-16T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.053374 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.053405 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.053415 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.053429 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.053441 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:20Z","lastTransitionTime":"2026-02-16T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.155600 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.155649 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.155661 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.155679 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.155691 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:20Z","lastTransitionTime":"2026-02-16T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.258099 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.258165 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.258176 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.258190 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.258200 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:20Z","lastTransitionTime":"2026-02-16T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.360354 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.360405 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.360421 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.360452 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.360513 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:20Z","lastTransitionTime":"2026-02-16T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.463892 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.463933 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.463942 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.463973 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.463985 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:20Z","lastTransitionTime":"2026-02-16T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.566004 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.566269 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.566341 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.566424 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.566561 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:20Z","lastTransitionTime":"2026-02-16T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.669241 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.669274 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.669283 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.669298 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.669307 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:20Z","lastTransitionTime":"2026-02-16T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.771455 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.771492 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.771503 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.771520 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.771531 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:20Z","lastTransitionTime":"2026-02-16T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.874063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.874296 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.874414 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.874521 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.874596 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:20Z","lastTransitionTime":"2026-02-16T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.878474 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.878601 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:20 crc kubenswrapper[4812]: E0216 13:33:20.878737 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:20 crc kubenswrapper[4812]: E0216 13:33:20.878600 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.901002 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 00:13:04.843572629 +0000 UTC Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.976734 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.976765 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.976773 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.976786 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:20 crc kubenswrapper[4812]: I0216 13:33:20.976796 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:20Z","lastTransitionTime":"2026-02-16T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.079596 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.079633 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.079643 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.079658 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.079701 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:21Z","lastTransitionTime":"2026-02-16T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.182085 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.182142 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.182154 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.182170 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.182182 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:21Z","lastTransitionTime":"2026-02-16T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.285545 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.285586 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.285596 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.285612 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.285624 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:21Z","lastTransitionTime":"2026-02-16T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.389224 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.389290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.389302 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.389320 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.389332 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:21Z","lastTransitionTime":"2026-02-16T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.492088 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.492128 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.492139 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.492155 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.492166 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:21Z","lastTransitionTime":"2026-02-16T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.594732 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.594777 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.594789 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.594806 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.594817 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:21Z","lastTransitionTime":"2026-02-16T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.697894 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.697930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.697947 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.697965 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.697977 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:21Z","lastTransitionTime":"2026-02-16T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.799898 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.799935 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.799946 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.799963 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.799973 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:21Z","lastTransitionTime":"2026-02-16T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.881521 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:21 crc kubenswrapper[4812]: E0216 13:33:21.881648 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.881724 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:21 crc kubenswrapper[4812]: E0216 13:33:21.881924 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.893426 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.901223 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:35:06.155499931 +0000 UTC Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.903056 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.903099 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.903114 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.903134 4812 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.903145 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:21Z","lastTransitionTime":"2026-02-16T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.904333 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5w4kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f07f8fe-99f2-4f2e-b9f8-56841d756064\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6394bf9fe0ec8408dd8bfb0105dcc686f3a820850e75ce47db441f04d22c1c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5w4kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.916109 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4574e2db-75d7-4da6-bdf8-84a06c617799\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85905db3e100d71dfb29420eccfd9a129be4b9a6950a8e5e2915d7f8aabcc255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858f53f244902f66ee53409db591138aba707c545b1f7cc0da69a691be1e2138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f3e9624fe4d351638e9b45a1d575c06a3c9e7e12a77dcd8cb6a61996fe51fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0cd9201d7dd9b07b8b725890b21f0f06ee3dd3ff839e3fc6c2f4b9d4d736c01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.933663 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b29081-a34f-4671-85a3-e1bc2b16d37f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T13:32:12Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 13:32:11.932704 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 13:32:11.933194 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 13:32:11.934094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1719573121/tls.crt::/tmp/serving-cert-1719573121/tls.key\\\\\\\"\\\\nI0216 13:32:12.377062 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 13:32:12.379783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 13:32:12.379801 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 13:32:12.379819 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 13:32:12.379825 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 13:32:12.383118 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 13:32:12.383163 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 13:32:12.383168 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383175 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 13:32:12.383180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 13:32:12.383182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 13:32:12.383185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 13:32:12.383188 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 13:32:12.384888 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4849604df49bbed5fa784b8c8b0692e6a2
bae9748fdc160425e846e628df8ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.945434 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78ca0949b586a8839b02c0e01bd92571f3c4eaa156cfcd58546607c30cdbdcf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T13:33:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.957862 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://367d830f42471f794953f9aa4cb979d9c3dac3ef0f2364661058c50cc1b0d055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f6091f13d04f85
9b29636375d3a27e07eeaee338f0da4a70b471f7f26250e8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.971328 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:21 crc kubenswrapper[4812]: I0216 13:33:21.982815 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p9b2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcdbcfde-ed95-4587-a92e-c7fa071b1b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a86c16e138e436ef3976c554ab3d70e39a7515f662e305558513a633db53142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2tjt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p9b2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.001796 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a67ca714-af04-4a76-8a28-54d47f66b1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:33:11Z\\\",\\\"message\\\":\\\"6 6888 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-5w4kf\\\\nI0216 13:33:11.698505 6888 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 13:33:11.698512 6888 ovn.go:134] Ensuring zone local for Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0216 13:33:11.698505 6888 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 13:33:11.698516 6888 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0216 13:33:11.698523 6888 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0216 13:33:11.698525 6888 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 13:33:11.698536 6888 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0216 13:33:11.698537 6888 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0216 13:33:11.698543 6888 ovn.go:134]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:33:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a0d8b663b66e20d5e
fa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pzksg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:21Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.005281 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.005701 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.005804 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.005914 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.006000 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:22Z","lastTransitionTime":"2026-02-16T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.013191 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-szt79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-md4ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-szt79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:22 crc 
kubenswrapper[4812]: I0216 13:33:22.025392 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2hhp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"934e533e-cc26-4770-af67-3dbcaa0dab5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:33:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63cd21781b359c2f631527e3e4e2649264ee4e64c5d3cc9842fb9106017565c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T13:33:01Z\\\",\\\"message\\\":\\\"2026-02-16T13:32:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_30129742-2774-4438-ab09-c8d07b65190b\\\\n2026-02-16T13:32:16+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_30129742-2774-4438-ab09-c8d07b65190b to /host/opt/cni/bin/\\\\n2026-02-16T13:32:16Z [verbose] multus-daemon started\\\\n2026-02-16T13:32:16Z [verbose] Readiness Indicator file check\\\\n2026-02-16T13:33:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2xc4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2hhp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.036481 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c55e49a-a30d-4950-a690-c33d9f8a31e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ff320daab8e42aacfbe2cb2ef91f064cfd950840978ca9aaf3e11ea987dc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b
9d95a512bfbe0c6c4aa7c0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjqx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6mn9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.049098 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.059116 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed25531-078f-4432-b260-2dc45d63eed7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7c33b5d95fa2865d325956c87e1024adf7bf0a40ef2e590b467f9cee892138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97e12743ac6a6f99e8efd74b1fcafc157b16793659394ef957ca99f5886fd478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e12743ac6a6f99e8efd74b1fcafc157b16793659394ef957ca99f5886fd478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:31:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.071706 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc49bb6ecf7f6197ff2e3e76e7e51bb0842fec86477b85c62d1930170ac91d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.086112 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8g94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"145eec20-9328-4b99-b0ec-4870b6761385\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6366360c4a0ce6a9882aa517e034c45ef65a6ae8bf19f797f3ddaa99b8a76469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a8e92fe3cd2fbf149b89001a90fb08eb08d2bfd9dfd9076103a3188487c651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8f2013ab148ade4b59119a7ac5ab163705e1f4a7ed745bea78a55fd99db56c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82fcd3748eea5d22335579283729ba9696999c4acd666d0fb827cd77f0f8c0ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1de358005f8e176261aaddefbb90f4637599a8acf98ae7584747183954e75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dfda8f01541b4b02582151d47ea6f39341f193914c39989d12b772f460b7adc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c726f4f1c3e737307485a18a20c2b934d59bd5263ffff0aa5dd1c4b46e6df73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T13:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T13:32:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4vr9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8g94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.096990 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e823c28d-cc96-469c-a794-fb12a7ae6172\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d34a9809d2ade6a3b1cb69520313bd3886db5c666762ca9ce993b5251331fa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f8618e97cfc7a7d6673243425c62cfd9633fca65005e96b63e924c303f7e5da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ms9qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:32:26Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gt4zb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.108372 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.108419 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.108430 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.108466 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.108480 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:22Z","lastTransitionTime":"2026-02-16T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.108673 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5f43b73-bad3-4883-ad0b-4a8df6824248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:32:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T13:31:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1001938f74f193628f1faf5fada6a26836e7053b9f46656912df4c17ac11a1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5304f578aa
4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b778bd4c0e3d5d5158269f30b9646da76918d07d5a79e274b0b70ad1b4f38623\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94212f31a24fe0258d7c5241438e6ec1daf60b49bb87365fa851ae0a5628de1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T13:31:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T13:31:51Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T13:33:22Z is after 2025-08-24T17:21:41Z" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.211285 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.211327 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.211338 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.211353 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.211369 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:22Z","lastTransitionTime":"2026-02-16T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.314194 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.314239 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.314251 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.314268 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.314280 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:22Z","lastTransitionTime":"2026-02-16T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.416922 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.416993 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.417004 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.417017 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.417045 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:22Z","lastTransitionTime":"2026-02-16T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.519770 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.519811 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.519819 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.519835 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.519845 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:22Z","lastTransitionTime":"2026-02-16T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.621693 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.621734 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.621744 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.621757 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.621768 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:22Z","lastTransitionTime":"2026-02-16T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.724003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.724047 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.724060 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.724076 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.724087 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:22Z","lastTransitionTime":"2026-02-16T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.826810 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.826857 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.826869 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.826885 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.826896 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:22Z","lastTransitionTime":"2026-02-16T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.878283 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:22 crc kubenswrapper[4812]: E0216 13:33:22.878425 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.878281 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:22 crc kubenswrapper[4812]: E0216 13:33:22.878691 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.902274 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 02:01:02.798320073 +0000 UTC Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.929219 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.929257 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.929265 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.929277 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:22 crc kubenswrapper[4812]: I0216 13:33:22.929288 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:22Z","lastTransitionTime":"2026-02-16T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.031355 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.031401 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.031410 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.031425 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.031435 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:23Z","lastTransitionTime":"2026-02-16T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.134106 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.134148 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.134157 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.134171 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.134184 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:23Z","lastTransitionTime":"2026-02-16T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.236281 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.236323 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.236333 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.236351 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.236360 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:23Z","lastTransitionTime":"2026-02-16T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.339417 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.339528 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.339547 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.339573 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.339591 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:23Z","lastTransitionTime":"2026-02-16T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.441936 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.442004 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.442028 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.442058 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.442081 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:23Z","lastTransitionTime":"2026-02-16T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.544590 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.544680 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.544696 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.544714 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.544728 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:23Z","lastTransitionTime":"2026-02-16T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.647986 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.648031 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.648041 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.648061 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.648073 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:23Z","lastTransitionTime":"2026-02-16T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.750879 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.750916 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.750924 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.750938 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.750947 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:23Z","lastTransitionTime":"2026-02-16T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.853711 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.853791 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.853820 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.853859 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.853893 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:23Z","lastTransitionTime":"2026-02-16T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.878725 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.878820 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:23 crc kubenswrapper[4812]: E0216 13:33:23.878915 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:23 crc kubenswrapper[4812]: E0216 13:33:23.879041 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.880022 4812 scope.go:117] "RemoveContainer" containerID="b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d" Feb 16 13:33:23 crc kubenswrapper[4812]: E0216 13:33:23.880254 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.902486 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 16:33:54.487190827 +0000 UTC Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.956860 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.956931 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.956952 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:23 crc 
kubenswrapper[4812]: I0216 13:33:23.956980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:23 crc kubenswrapper[4812]: I0216 13:33:23.957002 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:23Z","lastTransitionTime":"2026-02-16T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.059253 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.059306 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.059317 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.059336 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.059350 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:24Z","lastTransitionTime":"2026-02-16T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.162405 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.162485 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.162497 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.162518 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.162531 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:24Z","lastTransitionTime":"2026-02-16T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.265743 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.265788 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.265797 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.265812 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.265822 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:24Z","lastTransitionTime":"2026-02-16T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.368533 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.368574 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.368585 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.368600 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.368612 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:24Z","lastTransitionTime":"2026-02-16T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.471360 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.471441 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.471493 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.471522 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.471533 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:24Z","lastTransitionTime":"2026-02-16T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.573983 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.574025 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.574036 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.574054 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.574068 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:24Z","lastTransitionTime":"2026-02-16T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.676584 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.676644 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.676664 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.676680 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.676693 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:24Z","lastTransitionTime":"2026-02-16T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.779023 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.779066 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.779078 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.779093 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.779104 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:24Z","lastTransitionTime":"2026-02-16T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.878974 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:24 crc kubenswrapper[4812]: E0216 13:33:24.879111 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.878974 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:24 crc kubenswrapper[4812]: E0216 13:33:24.879319 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.881282 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.881325 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.881335 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.881350 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.881363 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:24Z","lastTransitionTime":"2026-02-16T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.903398 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:19:15.445534356 +0000 UTC Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.984126 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.984194 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.984205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.984223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:24 crc kubenswrapper[4812]: I0216 13:33:24.984235 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:24Z","lastTransitionTime":"2026-02-16T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.086923 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.086963 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.086975 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.086993 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.087004 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:25Z","lastTransitionTime":"2026-02-16T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.189334 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.189383 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.189408 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.189430 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.189450 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:25Z","lastTransitionTime":"2026-02-16T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.292024 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.292058 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.292066 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.292078 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.292086 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:25Z","lastTransitionTime":"2026-02-16T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.394026 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.394081 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.394094 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.394109 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.394120 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:25Z","lastTransitionTime":"2026-02-16T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.496275 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.496369 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.496389 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.496843 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.496856 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:25Z","lastTransitionTime":"2026-02-16T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.599650 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.599695 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.599711 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.599727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.599741 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:25Z","lastTransitionTime":"2026-02-16T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.702227 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.702294 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.702341 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.702364 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.702376 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:25Z","lastTransitionTime":"2026-02-16T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.805078 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.805145 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.805156 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.805173 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.805186 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:25Z","lastTransitionTime":"2026-02-16T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.878975 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.879008 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:25 crc kubenswrapper[4812]: E0216 13:33:25.879243 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:25 crc kubenswrapper[4812]: E0216 13:33:25.879779 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.903817 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 01:31:11.5074068 +0000 UTC Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.907902 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.907955 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.908134 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.908158 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:25 crc kubenswrapper[4812]: I0216 13:33:25.908168 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:25Z","lastTransitionTime":"2026-02-16T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.011480 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.011721 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.011732 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.011747 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.011757 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:26Z","lastTransitionTime":"2026-02-16T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.114041 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.114085 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.114096 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.114120 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.114131 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:26Z","lastTransitionTime":"2026-02-16T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.217093 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.217247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.217824 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.217865 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.217880 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:26Z","lastTransitionTime":"2026-02-16T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.320664 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.320710 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.320724 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.320743 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.320757 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:26Z","lastTransitionTime":"2026-02-16T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.422401 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.422474 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.422487 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.422506 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.422518 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:26Z","lastTransitionTime":"2026-02-16T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.525490 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.525529 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.525561 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.525575 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.525584 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:26Z","lastTransitionTime":"2026-02-16T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.627987 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.628029 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.628040 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.628055 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.628067 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:26Z","lastTransitionTime":"2026-02-16T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.730128 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.730188 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.730199 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.730214 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.730226 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:26Z","lastTransitionTime":"2026-02-16T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.832627 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.832669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.832679 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.832694 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.832705 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:26Z","lastTransitionTime":"2026-02-16T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.877935 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.877969 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:26 crc kubenswrapper[4812]: E0216 13:33:26.878056 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:26 crc kubenswrapper[4812]: E0216 13:33:26.878231 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.904313 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:37:33.992093093 +0000 UTC Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.934631 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.934681 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.934696 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.934713 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:26 crc kubenswrapper[4812]: I0216 13:33:26.934728 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:26Z","lastTransitionTime":"2026-02-16T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.037884 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.037915 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.037923 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.037937 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.037945 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:27Z","lastTransitionTime":"2026-02-16T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.140754 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.140789 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.140801 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.140817 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.140829 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:27Z","lastTransitionTime":"2026-02-16T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.243224 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.243269 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.243278 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.243295 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.243308 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:27Z","lastTransitionTime":"2026-02-16T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.345599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.345637 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.345647 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.345660 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.345670 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:27Z","lastTransitionTime":"2026-02-16T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.448074 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.448497 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.448538 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.448559 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.448571 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:27Z","lastTransitionTime":"2026-02-16T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.550620 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.550664 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.550675 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.550691 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.550704 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:27Z","lastTransitionTime":"2026-02-16T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.652853 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.652913 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.652929 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.652951 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.652965 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:27Z","lastTransitionTime":"2026-02-16T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.755871 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.755928 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.755940 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.755953 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.755963 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:27Z","lastTransitionTime":"2026-02-16T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.858151 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.858484 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.858570 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.858677 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.858761 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:27Z","lastTransitionTime":"2026-02-16T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.878618 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:27 crc kubenswrapper[4812]: E0216 13:33:27.878759 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.878816 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:27 crc kubenswrapper[4812]: E0216 13:33:27.878978 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.905158 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:37:22.806959288 +0000 UTC Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.961497 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.961542 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.961553 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.961571 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:27 crc kubenswrapper[4812]: I0216 13:33:27.961583 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:27Z","lastTransitionTime":"2026-02-16T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.064466 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.064516 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.064530 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.064549 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.064561 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:28Z","lastTransitionTime":"2026-02-16T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.167231 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.167270 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.167281 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.167296 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.167309 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:28Z","lastTransitionTime":"2026-02-16T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.270175 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.270216 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.270224 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.270238 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.270246 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:28Z","lastTransitionTime":"2026-02-16T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.372022 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.372049 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.372057 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.372069 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.372078 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:28Z","lastTransitionTime":"2026-02-16T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.475019 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.475415 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.475609 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.475792 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.475927 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:28Z","lastTransitionTime":"2026-02-16T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.578122 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.578160 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.578169 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.578184 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.578194 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:28Z","lastTransitionTime":"2026-02-16T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.680620 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.680654 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.680663 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.680681 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.680697 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:28Z","lastTransitionTime":"2026-02-16T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.782997 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.783073 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.783091 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.783116 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.783140 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:28Z","lastTransitionTime":"2026-02-16T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.878160 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.878195 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 13:33:28 crc kubenswrapper[4812]: E0216 13:33:28.878352 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 13:33:28 crc kubenswrapper[4812]: E0216 13:33:28.878482 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.885538 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.885603 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.885619 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.885641 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.885656 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:28Z","lastTransitionTime":"2026-02-16T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.906079 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 02:52:22.565177375 +0000 UTC
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.988903 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.988960 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.988977 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.988999 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:28 crc kubenswrapper[4812]: I0216 13:33:28.989017 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:28Z","lastTransitionTime":"2026-02-16T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.091127 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.091220 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.091238 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.091261 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.091277 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:29Z","lastTransitionTime":"2026-02-16T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.193127 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.193192 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.193204 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.193222 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.193234 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:29Z","lastTransitionTime":"2026-02-16T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.296074 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.296106 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.296115 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.296128 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.296137 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:29Z","lastTransitionTime":"2026-02-16T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.399046 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.399117 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.399128 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.399145 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.399157 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:29Z","lastTransitionTime":"2026-02-16T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.502057 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.502128 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.502148 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.502172 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.502191 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:29Z","lastTransitionTime":"2026-02-16T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.605157 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.605209 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.605225 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.605247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.605263 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:29Z","lastTransitionTime":"2026-02-16T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.707354 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.707387 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.707396 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.707410 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.707420 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:29Z","lastTransitionTime":"2026-02-16T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.810009 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.810054 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.810063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.810079 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.810088 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:29Z","lastTransitionTime":"2026-02-16T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.878974 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79"
Feb 16 13:33:29 crc kubenswrapper[4812]: E0216 13:33:29.879142 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.878974 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 13:33:29 crc kubenswrapper[4812]: E0216 13:33:29.879399 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.906247 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 04:28:58.410287053 +0000 UTC
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.912300 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.912356 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.912369 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.912407 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:29 crc kubenswrapper[4812]: I0216 13:33:29.912421 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:29Z","lastTransitionTime":"2026-02-16T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.014991 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.015040 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.015053 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.015069 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.015080 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:30Z","lastTransitionTime":"2026-02-16T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.027378 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.027484 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.027502 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.027523 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.027538 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T13:33:30Z","lastTransitionTime":"2026-02-16T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.084963 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"]
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.085336 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.087216 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.088072 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.089086 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.090335 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.119682 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2hhp5" podStartSLOduration=77.119664458 podStartE2EDuration="1m17.119664458s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:33:30.107129192 +0000 UTC m=+99.171459893" watchObservedRunningTime="2026-02-16 13:33:30.119664458 +0000 UTC m=+99.183995159"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.120043 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podStartSLOduration=77.120036279 podStartE2EDuration="1m17.120036279s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:33:30.119417551 +0000 UTC m=+99.183748292" watchObservedRunningTime="2026-02-16 13:33:30.120036279 +0000 UTC m=+99.184366990"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.147674 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/44479097-9d42-4a0f-9982-ad6a87565c70-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jqdzw\" (UID: \"44479097-9d42-4a0f-9982-ad6a87565c70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.147721 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44479097-9d42-4a0f-9982-ad6a87565c70-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jqdzw\" (UID: \"44479097-9d42-4a0f-9982-ad6a87565c70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.147755 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/44479097-9d42-4a0f-9982-ad6a87565c70-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jqdzw\" (UID: \"44479097-9d42-4a0f-9982-ad6a87565c70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.147778 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44479097-9d42-4a0f-9982-ad6a87565c70-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jqdzw\" (UID: \"44479097-9d42-4a0f-9982-ad6a87565c70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.147803 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/44479097-9d42-4a0f-9982-ad6a87565c70-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jqdzw\" (UID: \"44479097-9d42-4a0f-9982-ad6a87565c70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.161342 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=26.161323762 podStartE2EDuration="26.161323762s" podCreationTimestamp="2026-02-16 13:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:33:30.161150248 +0000 UTC m=+99.225480949" watchObservedRunningTime="2026-02-16 13:33:30.161323762 +0000 UTC m=+99.225654453"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.191755 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-q8g94" podStartSLOduration=77.191734207 podStartE2EDuration="1m17.191734207s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:33:30.189258757 +0000 UTC m=+99.253589458" watchObservedRunningTime="2026-02-16 13:33:30.191734207 +0000 UTC m=+99.256064908"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.201602 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gt4zb" podStartSLOduration=76.201584727 podStartE2EDuration="1m16.201584727s" podCreationTimestamp="2026-02-16 13:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:33:30.201190386 +0000 UTC m=+99.265521097" watchObservedRunningTime="2026-02-16 13:33:30.201584727 +0000 UTC m=+99.265915428"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.229977 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=76.229954893 podStartE2EDuration="1m16.229954893s" podCreationTimestamp="2026-02-16 13:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:33:30.216739448 +0000 UTC m=+99.281070159" watchObservedRunningTime="2026-02-16 13:33:30.229954893 +0000 UTC m=+99.294285594"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.246340 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-5w4kf" podStartSLOduration=77.246314158 podStartE2EDuration="1m17.246314158s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:33:30.245784773 +0000 UTC m=+99.310115474" watchObservedRunningTime="2026-02-16 13:33:30.246314158 +0000 UTC m=+99.310644859"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.249107 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44479097-9d42-4a0f-9982-ad6a87565c70-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jqdzw\" (UID: \"44479097-9d42-4a0f-9982-ad6a87565c70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.249165 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/44479097-9d42-4a0f-9982-ad6a87565c70-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jqdzw\" (UID: \"44479097-9d42-4a0f-9982-ad6a87565c70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.249214 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/44479097-9d42-4a0f-9982-ad6a87565c70-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jqdzw\" (UID: \"44479097-9d42-4a0f-9982-ad6a87565c70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.249253 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44479097-9d42-4a0f-9982-ad6a87565c70-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jqdzw\" (UID: \"44479097-9d42-4a0f-9982-ad6a87565c70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.249293 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/44479097-9d42-4a0f-9982-ad6a87565c70-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jqdzw\" (UID: \"44479097-9d42-4a0f-9982-ad6a87565c70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.249292 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/44479097-9d42-4a0f-9982-ad6a87565c70-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jqdzw\" (UID: \"44479097-9d42-4a0f-9982-ad6a87565c70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.249369 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/44479097-9d42-4a0f-9982-ad6a87565c70-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jqdzw\" (UID: \"44479097-9d42-4a0f-9982-ad6a87565c70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.250333 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/44479097-9d42-4a0f-9982-ad6a87565c70-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jqdzw\" (UID: \"44479097-9d42-4a0f-9982-ad6a87565c70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.254984 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44479097-9d42-4a0f-9982-ad6a87565c70-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jqdzw\" (UID: \"44479097-9d42-4a0f-9982-ad6a87565c70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.271888 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44479097-9d42-4a0f-9982-ad6a87565c70-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jqdzw\" (UID: \"44479097-9d42-4a0f-9982-ad6a87565c70\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.277193 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=46.277175716 podStartE2EDuration="46.277175716s" podCreationTimestamp="2026-02-16 13:32:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:33:30.259141083 +0000 UTC m=+99.323471784" watchObservedRunningTime="2026-02-16 13:33:30.277175716 +0000 UTC m=+99.341506417"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.288156 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=77.288138347 podStartE2EDuration="1m17.288138347s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:33:30.276690722 +0000 UTC m=+99.341021423" watchObservedRunningTime="2026-02-16 13:33:30.288138347 +0000 UTC m=+99.352469048"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.325911 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-p9b2s" podStartSLOduration=77.325896651 podStartE2EDuration="1m17.325896651s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:33:30.325764237 +0000 UTC m=+99.390094958" watchObservedRunningTime="2026-02-16 13:33:30.325896651 +0000 UTC m=+99.390227342"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.404878 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.448965 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw" event={"ID":"44479097-9d42-4a0f-9982-ad6a87565c70","Type":"ContainerStarted","Data":"5a3dc51d40716aaef05ba636da5eb7754a38a5f516e2678219c5ee6d84cf4674"}
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.878935 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 13:33:30 crc kubenswrapper[4812]: E0216 13:33:30.879636 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.878982 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 13:33:30 crc kubenswrapper[4812]: E0216 13:33:30.879871 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.907282 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 13:39:56.733683118 +0000 UTC Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.908123 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 13:33:30 crc kubenswrapper[4812]: I0216 13:33:30.917899 4812 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 13:33:31 crc kubenswrapper[4812]: I0216 13:33:31.453638 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw" event={"ID":"44479097-9d42-4a0f-9982-ad6a87565c70","Type":"ContainerStarted","Data":"83c41f0378cd2a6f4be1bbc58eb7ed7f56889491fb085d0788a00d076801a951"} Feb 16 13:33:31 crc kubenswrapper[4812]: I0216 13:33:31.863537 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs\") pod \"network-metrics-daemon-szt79\" (UID: \"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\") " pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:31 crc kubenswrapper[4812]: E0216 13:33:31.863737 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 13:33:31 crc kubenswrapper[4812]: E0216 13:33:31.863847 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs podName:d2a1f0c6-cafa-4c67-a2ad-d6003e88613c nodeName:}" failed. 
No retries permitted until 2026-02-16 13:34:35.863813797 +0000 UTC m=+164.928144538 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs") pod "network-metrics-daemon-szt79" (UID: "d2a1f0c6-cafa-4c67-a2ad-d6003e88613c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 13:33:31 crc kubenswrapper[4812]: I0216 13:33:31.878422 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:31 crc kubenswrapper[4812]: I0216 13:33:31.879530 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:31 crc kubenswrapper[4812]: E0216 13:33:31.879591 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:31 crc kubenswrapper[4812]: E0216 13:33:31.879717 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:31 crc kubenswrapper[4812]: I0216 13:33:31.894305 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jqdzw" podStartSLOduration=78.894290453 podStartE2EDuration="1m18.894290453s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:33:31.467172342 +0000 UTC m=+100.531503053" watchObservedRunningTime="2026-02-16 13:33:31.894290453 +0000 UTC m=+100.958621164" Feb 16 13:33:31 crc kubenswrapper[4812]: I0216 13:33:31.894858 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 13:33:32 crc kubenswrapper[4812]: I0216 13:33:32.878553 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:32 crc kubenswrapper[4812]: I0216 13:33:32.878632 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:32 crc kubenswrapper[4812]: E0216 13:33:32.878777 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:32 crc kubenswrapper[4812]: E0216 13:33:32.878863 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:33 crc kubenswrapper[4812]: I0216 13:33:33.878409 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:33 crc kubenswrapper[4812]: I0216 13:33:33.878560 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:33 crc kubenswrapper[4812]: E0216 13:33:33.878785 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:33 crc kubenswrapper[4812]: E0216 13:33:33.878851 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:34 crc kubenswrapper[4812]: I0216 13:33:34.878724 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:34 crc kubenswrapper[4812]: I0216 13:33:34.878764 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:34 crc kubenswrapper[4812]: E0216 13:33:34.878938 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:34 crc kubenswrapper[4812]: E0216 13:33:34.879286 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:35 crc kubenswrapper[4812]: I0216 13:33:35.878295 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:35 crc kubenswrapper[4812]: I0216 13:33:35.878310 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:35 crc kubenswrapper[4812]: E0216 13:33:35.878538 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:35 crc kubenswrapper[4812]: E0216 13:33:35.878627 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:36 crc kubenswrapper[4812]: I0216 13:33:36.879013 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:36 crc kubenswrapper[4812]: I0216 13:33:36.879313 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:36 crc kubenswrapper[4812]: E0216 13:33:36.879598 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:36 crc kubenswrapper[4812]: E0216 13:33:36.879688 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:36 crc kubenswrapper[4812]: I0216 13:33:36.880819 4812 scope.go:117] "RemoveContainer" containerID="b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d" Feb 16 13:33:36 crc kubenswrapper[4812]: E0216 13:33:36.881046 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" Feb 16 13:33:37 crc kubenswrapper[4812]: I0216 13:33:37.878294 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:37 crc kubenswrapper[4812]: I0216 13:33:37.878477 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:37 crc kubenswrapper[4812]: E0216 13:33:37.878600 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:37 crc kubenswrapper[4812]: E0216 13:33:37.878769 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:38 crc kubenswrapper[4812]: I0216 13:33:38.878963 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:38 crc kubenswrapper[4812]: I0216 13:33:38.878979 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:38 crc kubenswrapper[4812]: E0216 13:33:38.879103 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:38 crc kubenswrapper[4812]: E0216 13:33:38.879338 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:39 crc kubenswrapper[4812]: I0216 13:33:39.878380 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:39 crc kubenswrapper[4812]: I0216 13:33:39.878534 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:39 crc kubenswrapper[4812]: E0216 13:33:39.878890 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:39 crc kubenswrapper[4812]: E0216 13:33:39.878996 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:40 crc kubenswrapper[4812]: I0216 13:33:40.878290 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:40 crc kubenswrapper[4812]: I0216 13:33:40.878291 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:40 crc kubenswrapper[4812]: E0216 13:33:40.878460 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:40 crc kubenswrapper[4812]: E0216 13:33:40.878734 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:41 crc kubenswrapper[4812]: I0216 13:33:41.878664 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:41 crc kubenswrapper[4812]: I0216 13:33:41.878775 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:41 crc kubenswrapper[4812]: E0216 13:33:41.880220 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:41 crc kubenswrapper[4812]: E0216 13:33:41.880582 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:41 crc kubenswrapper[4812]: I0216 13:33:41.923867 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=10.923847453 podStartE2EDuration="10.923847453s" podCreationTimestamp="2026-02-16 13:33:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:33:41.9188254 +0000 UTC m=+110.983156121" watchObservedRunningTime="2026-02-16 13:33:41.923847453 +0000 UTC m=+110.988178154" Feb 16 13:33:42 crc kubenswrapper[4812]: I0216 13:33:42.878573 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:42 crc kubenswrapper[4812]: I0216 13:33:42.878607 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:42 crc kubenswrapper[4812]: E0216 13:33:42.878750 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:42 crc kubenswrapper[4812]: E0216 13:33:42.878815 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:43 crc kubenswrapper[4812]: I0216 13:33:43.878357 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:43 crc kubenswrapper[4812]: I0216 13:33:43.878515 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:43 crc kubenswrapper[4812]: E0216 13:33:43.878646 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:43 crc kubenswrapper[4812]: E0216 13:33:43.878817 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:44 crc kubenswrapper[4812]: I0216 13:33:44.878267 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:44 crc kubenswrapper[4812]: I0216 13:33:44.878290 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:44 crc kubenswrapper[4812]: E0216 13:33:44.878418 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:44 crc kubenswrapper[4812]: E0216 13:33:44.878538 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:45 crc kubenswrapper[4812]: I0216 13:33:45.878186 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:45 crc kubenswrapper[4812]: I0216 13:33:45.878185 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:45 crc kubenswrapper[4812]: E0216 13:33:45.878419 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:45 crc kubenswrapper[4812]: E0216 13:33:45.878600 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:46 crc kubenswrapper[4812]: I0216 13:33:46.877987 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:46 crc kubenswrapper[4812]: E0216 13:33:46.878123 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:46 crc kubenswrapper[4812]: I0216 13:33:46.878602 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:46 crc kubenswrapper[4812]: E0216 13:33:46.878678 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:47 crc kubenswrapper[4812]: I0216 13:33:47.878720 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:47 crc kubenswrapper[4812]: I0216 13:33:47.878720 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:47 crc kubenswrapper[4812]: E0216 13:33:47.878860 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:47 crc kubenswrapper[4812]: E0216 13:33:47.878942 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:48 crc kubenswrapper[4812]: I0216 13:33:48.511427 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2hhp5_934e533e-cc26-4770-af67-3dbcaa0dab5b/kube-multus/1.log" Feb 16 13:33:48 crc kubenswrapper[4812]: I0216 13:33:48.511982 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2hhp5_934e533e-cc26-4770-af67-3dbcaa0dab5b/kube-multus/0.log" Feb 16 13:33:48 crc kubenswrapper[4812]: I0216 13:33:48.512054 4812 generic.go:334] "Generic (PLEG): container finished" podID="934e533e-cc26-4770-af67-3dbcaa0dab5b" containerID="63cd21781b359c2f631527e3e4e2649264ee4e64c5d3cc9842fb9106017565c9" exitCode=1 Feb 16 13:33:48 crc kubenswrapper[4812]: I0216 13:33:48.512121 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2hhp5" event={"ID":"934e533e-cc26-4770-af67-3dbcaa0dab5b","Type":"ContainerDied","Data":"63cd21781b359c2f631527e3e4e2649264ee4e64c5d3cc9842fb9106017565c9"} Feb 16 13:33:48 crc kubenswrapper[4812]: I0216 13:33:48.512241 4812 scope.go:117] "RemoveContainer" containerID="8a9a856d7be88b316dbac827e3998da5df55019101f11f2e6e7d763a78c4a80b" Feb 16 13:33:48 crc kubenswrapper[4812]: I0216 13:33:48.512667 4812 scope.go:117] "RemoveContainer" containerID="63cd21781b359c2f631527e3e4e2649264ee4e64c5d3cc9842fb9106017565c9" Feb 16 13:33:48 crc kubenswrapper[4812]: E0216 13:33:48.512845 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2hhp5_openshift-multus(934e533e-cc26-4770-af67-3dbcaa0dab5b)\"" pod="openshift-multus/multus-2hhp5" podUID="934e533e-cc26-4770-af67-3dbcaa0dab5b" Feb 16 13:33:48 crc kubenswrapper[4812]: I0216 13:33:48.878238 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:48 crc kubenswrapper[4812]: I0216 13:33:48.878213 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:48 crc kubenswrapper[4812]: E0216 13:33:48.878667 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:48 crc kubenswrapper[4812]: E0216 13:33:48.878849 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:49 crc kubenswrapper[4812]: I0216 13:33:49.516336 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2hhp5_934e533e-cc26-4770-af67-3dbcaa0dab5b/kube-multus/1.log" Feb 16 13:33:49 crc kubenswrapper[4812]: I0216 13:33:49.879017 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:49 crc kubenswrapper[4812]: I0216 13:33:49.880090 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:49 crc kubenswrapper[4812]: E0216 13:33:49.880174 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:49 crc kubenswrapper[4812]: E0216 13:33:49.880369 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:50 crc kubenswrapper[4812]: I0216 13:33:50.878220 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:50 crc kubenswrapper[4812]: I0216 13:33:50.878179 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:50 crc kubenswrapper[4812]: E0216 13:33:50.878426 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:50 crc kubenswrapper[4812]: E0216 13:33:50.879122 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:50 crc kubenswrapper[4812]: I0216 13:33:50.881258 4812 scope.go:117] "RemoveContainer" containerID="b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d" Feb 16 13:33:50 crc kubenswrapper[4812]: E0216 13:33:50.881747 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pzksg_openshift-ovn-kubernetes(a67ca714-af04-4a76-8a28-54d47f66b1fa)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" Feb 16 13:33:51 crc kubenswrapper[4812]: I0216 13:33:51.877917 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:51 crc kubenswrapper[4812]: E0216 13:33:51.879676 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:51 crc kubenswrapper[4812]: I0216 13:33:51.879737 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:51 crc kubenswrapper[4812]: E0216 13:33:51.880302 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:51 crc kubenswrapper[4812]: E0216 13:33:51.918739 4812 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 16 13:33:51 crc kubenswrapper[4812]: E0216 13:33:51.994572 4812 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 13:33:52 crc kubenswrapper[4812]: I0216 13:33:52.878718 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:52 crc kubenswrapper[4812]: I0216 13:33:52.878806 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:52 crc kubenswrapper[4812]: E0216 13:33:52.878882 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:52 crc kubenswrapper[4812]: E0216 13:33:52.878953 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:53 crc kubenswrapper[4812]: I0216 13:33:53.878351 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:53 crc kubenswrapper[4812]: I0216 13:33:53.878505 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:53 crc kubenswrapper[4812]: E0216 13:33:53.879790 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:53 crc kubenswrapper[4812]: E0216 13:33:53.879938 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:54 crc kubenswrapper[4812]: I0216 13:33:54.878161 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:54 crc kubenswrapper[4812]: I0216 13:33:54.878169 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:54 crc kubenswrapper[4812]: E0216 13:33:54.878641 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:54 crc kubenswrapper[4812]: E0216 13:33:54.878825 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:55 crc kubenswrapper[4812]: I0216 13:33:55.878189 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:55 crc kubenswrapper[4812]: E0216 13:33:55.878633 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:55 crc kubenswrapper[4812]: I0216 13:33:55.878357 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:55 crc kubenswrapper[4812]: E0216 13:33:55.878711 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:56 crc kubenswrapper[4812]: I0216 13:33:56.878344 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:56 crc kubenswrapper[4812]: I0216 13:33:56.878345 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:56 crc kubenswrapper[4812]: E0216 13:33:56.878540 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:56 crc kubenswrapper[4812]: E0216 13:33:56.878647 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:56 crc kubenswrapper[4812]: E0216 13:33:56.996195 4812 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 13:33:57 crc kubenswrapper[4812]: I0216 13:33:57.878587 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:57 crc kubenswrapper[4812]: I0216 13:33:57.878635 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:57 crc kubenswrapper[4812]: E0216 13:33:57.878777 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:33:57 crc kubenswrapper[4812]: E0216 13:33:57.878880 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:58 crc kubenswrapper[4812]: I0216 13:33:58.878435 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:33:58 crc kubenswrapper[4812]: I0216 13:33:58.878435 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:33:58 crc kubenswrapper[4812]: E0216 13:33:58.878615 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:33:58 crc kubenswrapper[4812]: E0216 13:33:58.878780 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:33:59 crc kubenswrapper[4812]: I0216 13:33:59.879075 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:33:59 crc kubenswrapper[4812]: E0216 13:33:59.879513 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:33:59 crc kubenswrapper[4812]: I0216 13:33:59.879629 4812 scope.go:117] "RemoveContainer" containerID="63cd21781b359c2f631527e3e4e2649264ee4e64c5d3cc9842fb9106017565c9" Feb 16 13:33:59 crc kubenswrapper[4812]: I0216 13:33:59.879919 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:33:59 crc kubenswrapper[4812]: E0216 13:33:59.880064 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:34:00 crc kubenswrapper[4812]: I0216 13:34:00.555955 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2hhp5_934e533e-cc26-4770-af67-3dbcaa0dab5b/kube-multus/1.log" Feb 16 13:34:00 crc kubenswrapper[4812]: I0216 13:34:00.556009 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2hhp5" event={"ID":"934e533e-cc26-4770-af67-3dbcaa0dab5b","Type":"ContainerStarted","Data":"2d41f8ea13f87efbf94b6b39515e60a7f967c77a0430c1428f73fb0fd196cb4b"} Feb 16 13:34:00 crc kubenswrapper[4812]: I0216 13:34:00.877979 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:34:00 crc kubenswrapper[4812]: I0216 13:34:00.877979 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:34:00 crc kubenswrapper[4812]: E0216 13:34:00.878616 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:34:00 crc kubenswrapper[4812]: E0216 13:34:00.878736 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:34:01 crc kubenswrapper[4812]: I0216 13:34:01.878582 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:34:01 crc kubenswrapper[4812]: I0216 13:34:01.878610 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:34:01 crc kubenswrapper[4812]: E0216 13:34:01.880504 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:34:01 crc kubenswrapper[4812]: E0216 13:34:01.880579 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:34:01 crc kubenswrapper[4812]: E0216 13:34:01.997094 4812 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 13:34:02 crc kubenswrapper[4812]: I0216 13:34:02.878022 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:34:02 crc kubenswrapper[4812]: E0216 13:34:02.878150 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:34:02 crc kubenswrapper[4812]: I0216 13:34:02.878022 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:34:02 crc kubenswrapper[4812]: E0216 13:34:02.878723 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:34:02 crc kubenswrapper[4812]: I0216 13:34:02.879051 4812 scope.go:117] "RemoveContainer" containerID="b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d" Feb 16 13:34:03 crc kubenswrapper[4812]: I0216 13:34:03.567913 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovnkube-controller/3.log" Feb 16 13:34:03 crc kubenswrapper[4812]: I0216 13:34:03.570431 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerStarted","Data":"3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622"} Feb 16 13:34:03 crc kubenswrapper[4812]: I0216 13:34:03.570858 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:34:03 crc kubenswrapper[4812]: I0216 13:34:03.878724 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:34:03 crc kubenswrapper[4812]: I0216 13:34:03.878797 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:34:03 crc kubenswrapper[4812]: E0216 13:34:03.878833 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:34:03 crc kubenswrapper[4812]: E0216 13:34:03.878874 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:34:03 crc kubenswrapper[4812]: I0216 13:34:03.910741 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podStartSLOduration=110.910718671 podStartE2EDuration="1m50.910718671s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:03.596177867 +0000 UTC m=+132.660508598" watchObservedRunningTime="2026-02-16 13:34:03.910718671 +0000 UTC m=+132.975049382" Feb 16 13:34:03 crc kubenswrapper[4812]: I0216 13:34:03.911092 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-szt79"] Feb 16 13:34:04 crc kubenswrapper[4812]: I0216 13:34:04.573719 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:34:04 crc kubenswrapper[4812]: E0216 13:34:04.573885 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:34:04 crc kubenswrapper[4812]: I0216 13:34:04.877932 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:34:04 crc kubenswrapper[4812]: I0216 13:34:04.878043 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:34:04 crc kubenswrapper[4812]: E0216 13:34:04.878089 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:34:04 crc kubenswrapper[4812]: E0216 13:34:04.878184 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:34:05 crc kubenswrapper[4812]: I0216 13:34:05.879059 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:34:05 crc kubenswrapper[4812]: E0216 13:34:05.879800 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 13:34:06 crc kubenswrapper[4812]: I0216 13:34:06.878432 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:34:06 crc kubenswrapper[4812]: I0216 13:34:06.878618 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:34:06 crc kubenswrapper[4812]: I0216 13:34:06.878483 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:34:06 crc kubenswrapper[4812]: E0216 13:34:06.878721 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-szt79" podUID="d2a1f0c6-cafa-4c67-a2ad-d6003e88613c" Feb 16 13:34:06 crc kubenswrapper[4812]: E0216 13:34:06.878843 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 13:34:06 crc kubenswrapper[4812]: E0216 13:34:06.879008 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 13:34:07 crc kubenswrapper[4812]: I0216 13:34:07.878858 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:34:07 crc kubenswrapper[4812]: I0216 13:34:07.881202 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 13:34:07 crc kubenswrapper[4812]: I0216 13:34:07.882314 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 13:34:08 crc kubenswrapper[4812]: I0216 13:34:08.878740 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:34:08 crc kubenswrapper[4812]: I0216 13:34:08.878776 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:34:08 crc kubenswrapper[4812]: I0216 13:34:08.878735 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:34:08 crc kubenswrapper[4812]: I0216 13:34:08.881656 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 13:34:08 crc kubenswrapper[4812]: I0216 13:34:08.881736 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 13:34:08 crc kubenswrapper[4812]: I0216 13:34:08.881920 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 13:34:08 crc kubenswrapper[4812]: I0216 13:34:08.881957 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 13:34:10 crc kubenswrapper[4812]: I0216 13:34:10.952632 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.000844 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-72lrh"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.001346 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.001924 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wc6pn"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.002471 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.003647 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4cx9t"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.004027 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.004403 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.010137 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.011664 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.026185 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.026229 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-942n4"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.026326 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.026503 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.026689 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 
13:34:11.026799 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.026921 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.027052 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.027308 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.027435 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.026930 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.027647 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.027698 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.027768 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.027698 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.027812 4812 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.027779 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.027923 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.028034 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.028422 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.028477 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.028584 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.031811 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.036131 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.036752 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.036815 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.036828 4812 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.037022 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.037047 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.037110 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.037196 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.037217 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.037598 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nplvk"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.037942 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nplvk" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.038086 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.038257 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.038359 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.038382 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.038403 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.038369 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.038508 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.038527 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.038542 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.038609 4812 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.038628 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.038655 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.038680 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.038903 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.039097 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.040164 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.040657 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.040892 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.041392 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.041800 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.042310 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7wvg2"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.042697 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.045337 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4mg2p"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.046081 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.046953 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.047070 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.047133 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.047195 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.046959 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-tpgqc"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.047207 4812 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.047599 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.047808 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.047928 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.047970 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.048041 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.048165 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.048224 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.048180 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.048329 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.049982 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-sv88f"] 
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.055352 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4cx9t"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.055397 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2f89v"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.055764 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-plrxx"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.055898 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-sv88f" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.056184 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.056771 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.057895 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.078422 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.078642 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.078776 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.079118 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.079311 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.079658 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.079862 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.080202 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.080388 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.080593 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg"] Feb 
16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.080656 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.080841 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.081015 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.081190 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.081418 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.081647 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.081422 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.082249 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.083885 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.089838 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.102597 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.104377 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.106047 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.106282 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.106392 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.106516 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.106604 4812 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.107795 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-w525k"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.107856 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.108466 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.109019 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.109432 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.110040 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.110282 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.110429 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.110624 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.110779 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 13:34:11 crc 
kubenswrapper[4812]: I0216 13:34:11.110930 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.111145 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.111292 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.111429 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.111899 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.112299 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.112606 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.113069 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.113647 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-w525k" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.113713 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.114333 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.114558 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.117640 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.118015 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.118179 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.118625 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.118784 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.118932 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.119072 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.120251 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-b7psd"] 
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.120990 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.122530 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.123158 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.127512 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-46895"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.128239 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.128651 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.129622 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46895" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.139855 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klckm"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.140405 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.148816 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.151839 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.156571 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.160013 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.160640 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.161728 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.162658 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.163566 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klckm" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.163895 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nwjmg"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.164467 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.164906 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.165418 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-6c7g6"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.165911 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nwjmg" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.179984 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h57x4"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.180062 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.180162 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.180172 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6c7g6" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.180269 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.183192 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.185771 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-image-import-ca\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.185851 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.185881 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cabaed27-8848-4061-9644-ff60ca94389c-config\") pod \"authentication-operator-69f744f599-7wvg2\" (UID: \"cabaed27-8848-4061-9644-ff60ca94389c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.185899 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-audit-policies\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.185941 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-audit-dir\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.185966 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qslkx\" (UniqueName: \"kubernetes.io/projected/97376a54-e945-445a-b5fe-b2b658705dc5-kube-api-access-qslkx\") pod \"openshift-config-operator-7777fb866f-9ttfl\" (UID: \"97376a54-e945-445a-b5fe-b2b658705dc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.185989 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-config\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.186008 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.186028 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-72lrh\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.186050 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n78zp\" (UniqueName: \"kubernetes.io/projected/c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e-kube-api-access-n78zp\") pod \"cluster-samples-operator-665b6dd947-tnqj2\" (UID: \"c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.186076 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5245eea2-0039-4127-bd35-5d4ab5204b62-audit-dir\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.186097 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5e2f5e-3826-45ce-a7af-a2670fcc41ab-config\") pod \"console-operator-58897d9998-nplvk\" (UID: \"ae5e2f5e-3826-45ce-a7af-a2670fcc41ab\") " pod="openshift-console-operator/console-operator-58897d9998-nplvk" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.186113 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.186136 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.186158 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97376a54-e945-445a-b5fe-b2b658705dc5-serving-cert\") pod \"openshift-config-operator-7777fb866f-9ttfl\" (UID: \"97376a54-e945-445a-b5fe-b2b658705dc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.186180 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krtzh\" (UniqueName: \"kubernetes.io/projected/c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34-kube-api-access-krtzh\") pod \"cluster-image-registry-operator-dc59b4c8b-v2s8n\" (UID: \"c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.186201 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5e2f5e-3826-45ce-a7af-a2670fcc41ab-serving-cert\") pod \"console-operator-58897d9998-nplvk\" (UID: \"ae5e2f5e-3826-45ce-a7af-a2670fcc41ab\") " pod="openshift-console-operator/console-operator-58897d9998-nplvk" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.186223 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/18c621e3-e734-428a-9bf7-930f8d450c8e-etcd-client\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.186303 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc99b\" (UniqueName: \"kubernetes.io/projected/c221ee5f-91c7-4ca7-9567-55cd7bd72beb-kube-api-access-jc99b\") pod \"downloads-7954f5f757-sv88f\" (UID: \"c221ee5f-91c7-4ca7-9567-55cd7bd72beb\") " pod="openshift-console/downloads-7954f5f757-sv88f" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.186767 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-h57x4" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.196023 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f368631-7f8d-4004-a36c-38cb52391cb4-serving-cert\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.196250 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.196471 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.196620 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.197343 4812 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.197376 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.197392 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kc7dg"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.197813 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-72lrh"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.197912 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.198141 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.199795 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jrx24"] Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200020 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200033 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cabaed27-8848-4061-9644-ff60ca94389c-serving-cert\") pod \"authentication-operator-69f744f599-7wvg2\" (UID: \"cabaed27-8848-4061-9644-ff60ca94389c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200242 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18c621e3-e734-428a-9bf7-930f8d450c8e-serving-cert\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200282 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200312 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9437d039-7efe-4e41-810c-2cf9c324ae08-config\") pod \"kube-apiserver-operator-766d6c64bb-h69cg\" (UID: \"9437d039-7efe-4e41-810c-2cf9c324ae08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200337 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabaed27-8848-4061-9644-ff60ca94389c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7wvg2\" (UID: \"cabaed27-8848-4061-9644-ff60ca94389c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200359 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200382 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lsb5\" (UniqueName: \"kubernetes.io/projected/ae5e2f5e-3826-45ce-a7af-a2670fcc41ab-kube-api-access-2lsb5\") pod \"console-operator-58897d9998-nplvk\" (UID: \"ae5e2f5e-3826-45ce-a7af-a2670fcc41ab\") " pod="openshift-console-operator/console-operator-58897d9998-nplvk" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200401 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzhd7\" (UniqueName: \"kubernetes.io/projected/5245eea2-0039-4127-bd35-5d4ab5204b62-kube-api-access-mzhd7\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200422 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f368631-7f8d-4004-a36c-38cb52391cb4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200438 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200477 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-node-pullsecrets\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200492 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200500 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jrx24" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200508 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200564 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9437d039-7efe-4e41-810c-2cf9c324ae08-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h69cg\" (UID: \"9437d039-7efe-4e41-810c-2cf9c324ae08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200599 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-v2s8n\" (UID: \"c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200643 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bppch\" (UniqueName: \"kubernetes.io/projected/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-kube-api-access-bppch\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200722 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmndv\" (UniqueName: \"kubernetes.io/projected/18c621e3-e734-428a-9bf7-930f8d450c8e-kube-api-access-nmndv\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200760 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6a96223-8094-41a8-a311-231ef35ac6b2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mcbf\" (UID: \"a6a96223-8094-41a8-a311-231ef35ac6b2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200804 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-v2s8n\" (UID: \"c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200839 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-serving-cert\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200899 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/d8f24d90-54d8-4344-8140-c9fa919b456a-console-oauth-config\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200930 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c621e3-e734-428a-9bf7-930f8d450c8e-config\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200964 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f368631-7f8d-4004-a36c-38cb52391cb4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.200991 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201018 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/18c621e3-e734-428a-9bf7-930f8d450c8e-etcd-ca\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201045 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-client-ca\") pod \"controller-manager-879f6c89f-72lrh\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201069 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct7z2\" (UniqueName: \"kubernetes.io/projected/5be0ecd5-70de-4fa9-abcc-685cef55d530-kube-api-access-ct7z2\") pod \"controller-manager-879f6c89f-72lrh\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201102 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-encryption-config\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201123 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9xcx\" (UniqueName: \"kubernetes.io/projected/cabaed27-8848-4061-9644-ff60ca94389c-kube-api-access-t9xcx\") pod \"authentication-operator-69f744f599-7wvg2\" (UID: \"cabaed27-8848-4061-9644-ff60ca94389c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201148 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae5e2f5e-3826-45ce-a7af-a2670fcc41ab-trusted-ca\") pod \"console-operator-58897d9998-nplvk\" (UID: 
\"ae5e2f5e-3826-45ce-a7af-a2670fcc41ab\") " pod="openshift-console-operator/console-operator-58897d9998-nplvk" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201169 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201185 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-config\") pod \"controller-manager-879f6c89f-72lrh\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201202 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-etcd-client\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201218 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabaed27-8848-4061-9644-ff60ca94389c-service-ca-bundle\") pod \"authentication-operator-69f744f599-7wvg2\" (UID: \"cabaed27-8848-4061-9644-ff60ca94389c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201231 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8f24d90-54d8-4344-8140-c9fa919b456a-console-serving-cert\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201245 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6a96223-8094-41a8-a311-231ef35ac6b2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mcbf\" (UID: \"a6a96223-8094-41a8-a311-231ef35ac6b2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201258 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1f368631-7f8d-4004-a36c-38cb52391cb4-audit-policies\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201273 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-etcd-serving-ca\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201314 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/97376a54-e945-445a-b5fe-b2b658705dc5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9ttfl\" (UID: 
\"97376a54-e945-445a-b5fe-b2b658705dc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201335 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tnqj2\" (UID: \"c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201352 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdcp5\" (UniqueName: \"kubernetes.io/projected/d8f24d90-54d8-4344-8140-c9fa919b456a-kube-api-access-bdcp5\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201385 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-audit\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201413 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blvbj\" (UniqueName: \"kubernetes.io/projected/1f368631-7f8d-4004-a36c-38cb52391cb4-kube-api-access-blvbj\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201432 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wc6pn"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201467 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f368631-7f8d-4004-a36c-38cb52391cb4-audit-dir\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201495 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-console-config\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201504 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201517 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-trusted-ca-bundle\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201535 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq29c\" (UniqueName: \"kubernetes.io/projected/a6a96223-8094-41a8-a311-231ef35ac6b2-kube-api-access-zq29c\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mcbf\" (UID: \"a6a96223-8094-41a8-a311-231ef35ac6b2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201551 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-service-ca\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201568 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-oauth-serving-cert\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201587 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5be0ecd5-70de-4fa9-abcc-685cef55d530-serving-cert\") pod \"controller-manager-879f6c89f-72lrh\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201604 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-v2s8n\" (UID: \"c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201621 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f368631-7f8d-4004-a36c-38cb52391cb4-encryption-config\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201649 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/18c621e3-e734-428a-9bf7-930f8d450c8e-etcd-service-ca\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201680 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201710 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f368631-7f8d-4004-a36c-38cb52391cb4-etcd-client\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.201726 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9437d039-7efe-4e41-810c-2cf9c324ae08-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h69cg\" (UID: \"9437d039-7efe-4e41-810c-2cf9c324ae08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.202534 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.204817 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.206945 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.208125 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.209243 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7wvg2"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.211218 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tpgqc"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.212231 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nplvk"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.213296 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-sv88f"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.215567 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-w525k"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.215608 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.216514 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4mg2p"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.217290 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.218900 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2f89v"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.221259 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.221287 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nwjmg"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.222594 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.222671 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-pxwbj"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.223419 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-pxwbj"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.226068 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dqt5h"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.226958 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.227042 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dqt5h"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.227257 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.234540 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.234731 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.237396 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klckm"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.241288 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.241328 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6c7g6"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.243017 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.243186 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.243804 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.245270 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.246796 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jrx24"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.247810 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.249841 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-46895"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.250419 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.251882 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dqt5h"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.253061 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h57x4"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.254389 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kc7dg"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.255715 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-plrxx"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.256947 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.258132 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pxwbj"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.259383 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-qfjbm"]
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.260190 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qfjbm"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.263711 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.287552 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.309969 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cabaed27-8848-4061-9644-ff60ca94389c-config\") pod \"authentication-operator-69f744f599-7wvg2\" (UID: \"cabaed27-8848-4061-9644-ff60ca94389c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310035 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-audit-policies\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310078 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-config\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310100 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-audit-dir\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310120 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qslkx\" (UniqueName: \"kubernetes.io/projected/97376a54-e945-445a-b5fe-b2b658705dc5-kube-api-access-qslkx\") pod \"openshift-config-operator-7777fb866f-9ttfl\" (UID: \"97376a54-e945-445a-b5fe-b2b658705dc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310166 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5245eea2-0039-4127-bd35-5d4ab5204b62-audit-dir\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310191 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310234 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-72lrh\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310258 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n78zp\" (UniqueName: \"kubernetes.io/projected/c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e-kube-api-access-n78zp\") pod \"cluster-samples-operator-665b6dd947-tnqj2\" (UID: \"c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310280 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310324 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5e2f5e-3826-45ce-a7af-a2670fcc41ab-config\") pod \"console-operator-58897d9998-nplvk\" (UID: \"ae5e2f5e-3826-45ce-a7af-a2670fcc41ab\") " pod="openshift-console-operator/console-operator-58897d9998-nplvk"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310348 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krtzh\" (UniqueName: \"kubernetes.io/projected/c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34-kube-api-access-krtzh\") pod \"cluster-image-registry-operator-dc59b4c8b-v2s8n\" (UID: \"c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310385 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310412 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97376a54-e945-445a-b5fe-b2b658705dc5-serving-cert\") pod \"openshift-config-operator-7777fb866f-9ttfl\" (UID: \"97376a54-e945-445a-b5fe-b2b658705dc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310434 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc99b\" (UniqueName: \"kubernetes.io/projected/c221ee5f-91c7-4ca7-9567-55cd7bd72beb-kube-api-access-jc99b\") pod \"downloads-7954f5f757-sv88f\" (UID: \"c221ee5f-91c7-4ca7-9567-55cd7bd72beb\") " pod="openshift-console/downloads-7954f5f757-sv88f"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310480 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5e2f5e-3826-45ce-a7af-a2670fcc41ab-serving-cert\") pod \"console-operator-58897d9998-nplvk\" (UID: \"ae5e2f5e-3826-45ce-a7af-a2670fcc41ab\") " pod="openshift-console-operator/console-operator-58897d9998-nplvk"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310501 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/18c621e3-e734-428a-9bf7-930f8d450c8e-etcd-client\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310524 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310557 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f368631-7f8d-4004-a36c-38cb52391cb4-serving-cert\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310578 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cabaed27-8848-4061-9644-ff60ca94389c-serving-cert\") pod \"authentication-operator-69f744f599-7wvg2\" (UID: \"cabaed27-8848-4061-9644-ff60ca94389c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310609 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18c621e3-e734-428a-9bf7-930f8d450c8e-serving-cert\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310628 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9437d039-7efe-4e41-810c-2cf9c324ae08-config\") pod \"kube-apiserver-operator-766d6c64bb-h69cg\" (UID: \"9437d039-7efe-4e41-810c-2cf9c324ae08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310651 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabaed27-8848-4061-9644-ff60ca94389c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7wvg2\" (UID: \"cabaed27-8848-4061-9644-ff60ca94389c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310677 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310699 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lsb5\" (UniqueName: \"kubernetes.io/projected/ae5e2f5e-3826-45ce-a7af-a2670fcc41ab-kube-api-access-2lsb5\") pod \"console-operator-58897d9998-nplvk\" (UID: \"ae5e2f5e-3826-45ce-a7af-a2670fcc41ab\") " pod="openshift-console-operator/console-operator-58897d9998-nplvk"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310721 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzhd7\" (UniqueName: \"kubernetes.io/projected/5245eea2-0039-4127-bd35-5d4ab5204b62-kube-api-access-mzhd7\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310751 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f368631-7f8d-4004-a36c-38cb52391cb4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310778 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310809 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310835 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-node-pullsecrets\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310855 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310893 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9437d039-7efe-4e41-810c-2cf9c324ae08-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h69cg\" (UID: \"9437d039-7efe-4e41-810c-2cf9c324ae08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310911 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-v2s8n\" (UID: \"c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310951 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bppch\" (UniqueName: \"kubernetes.io/projected/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-kube-api-access-bppch\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.310985 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmndv\" (UniqueName: \"kubernetes.io/projected/18c621e3-e734-428a-9bf7-930f8d450c8e-kube-api-access-nmndv\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311006 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6a96223-8094-41a8-a311-231ef35ac6b2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mcbf\" (UID: \"a6a96223-8094-41a8-a311-231ef35ac6b2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311028 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-v2s8n\" (UID: \"c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311052 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-serving-cert\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311076 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8f24d90-54d8-4344-8140-c9fa919b456a-console-oauth-config\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311097 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c621e3-e734-428a-9bf7-930f8d450c8e-config\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311120 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f368631-7f8d-4004-a36c-38cb52391cb4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311141 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311159 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/18c621e3-e734-428a-9bf7-930f8d450c8e-etcd-ca\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311181 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae5e2f5e-3826-45ce-a7af-a2670fcc41ab-trusted-ca\") pod \"console-operator-58897d9998-nplvk\" (UID: \"ae5e2f5e-3826-45ce-a7af-a2670fcc41ab\") " pod="openshift-console-operator/console-operator-58897d9998-nplvk"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311203 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-client-ca\") pod \"controller-manager-879f6c89f-72lrh\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311226 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct7z2\" (UniqueName: \"kubernetes.io/projected/5be0ecd5-70de-4fa9-abcc-685cef55d530-kube-api-access-ct7z2\") pod \"controller-manager-879f6c89f-72lrh\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311258 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-encryption-config\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311285 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9xcx\" (UniqueName: \"kubernetes.io/projected/cabaed27-8848-4061-9644-ff60ca94389c-kube-api-access-t9xcx\") pod \"authentication-operator-69f744f599-7wvg2\" (UID: \"cabaed27-8848-4061-9644-ff60ca94389c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311307 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6a96223-8094-41a8-a311-231ef35ac6b2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mcbf\" (UID: \"a6a96223-8094-41a8-a311-231ef35ac6b2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311326 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311349 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-config\") pod \"controller-manager-879f6c89f-72lrh\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311373 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-etcd-client\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311395 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabaed27-8848-4061-9644-ff60ca94389c-service-ca-bundle\") pod \"authentication-operator-69f744f599-7wvg2\" (UID: \"cabaed27-8848-4061-9644-ff60ca94389c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311415 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8f24d90-54d8-4344-8140-c9fa919b456a-console-serving-cert\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311437 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1f368631-7f8d-4004-a36c-38cb52391cb4-audit-policies\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311474 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-etcd-serving-ca\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311517 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/97376a54-e945-445a-b5fe-b2b658705dc5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9ttfl\" (UID: \"97376a54-e945-445a-b5fe-b2b658705dc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311551 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tnqj2\" (UID: \"c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311579 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-audit\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn"
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311612 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdcp5\" (UniqueName:
\"kubernetes.io/projected/d8f24d90-54d8-4344-8140-c9fa919b456a-kube-api-access-bdcp5\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311652 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blvbj\" (UniqueName: \"kubernetes.io/projected/1f368631-7f8d-4004-a36c-38cb52391cb4-kube-api-access-blvbj\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311699 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f368631-7f8d-4004-a36c-38cb52391cb4-audit-dir\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311722 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-console-config\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311746 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-trusted-ca-bundle\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311783 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq29c\" (UniqueName: 
\"kubernetes.io/projected/a6a96223-8094-41a8-a311-231ef35ac6b2-kube-api-access-zq29c\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mcbf\" (UID: \"a6a96223-8094-41a8-a311-231ef35ac6b2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.311813 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-service-ca\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.312343 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-config\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.312784 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-oauth-serving-cert\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.312940 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.315839 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: 
\"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.316365 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.316503 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97376a54-e945-445a-b5fe-b2b658705dc5-serving-cert\") pod \"openshift-config-operator-7777fb866f-9ttfl\" (UID: \"97376a54-e945-445a-b5fe-b2b658705dc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.316586 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cabaed27-8848-4061-9644-ff60ca94389c-config\") pod \"authentication-operator-69f744f599-7wvg2\" (UID: \"cabaed27-8848-4061-9644-ff60ca94389c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.316601 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-audit-dir\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.316609 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-serving-cert\") pod 
\"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.316654 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5245eea2-0039-4127-bd35-5d4ab5204b62-audit-dir\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.316698 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5e2f5e-3826-45ce-a7af-a2670fcc41ab-serving-cert\") pod \"console-operator-58897d9998-nplvk\" (UID: \"ae5e2f5e-3826-45ce-a7af-a2670fcc41ab\") " pod="openshift-console-operator/console-operator-58897d9998-nplvk" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.316710 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5be0ecd5-70de-4fa9-abcc-685cef55d530-serving-cert\") pod \"controller-manager-879f6c89f-72lrh\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.316754 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-v2s8n\" (UID: \"c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.316787 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1f368631-7f8d-4004-a36c-38cb52391cb4-encryption-config\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.316816 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/18c621e3-e734-428a-9bf7-930f8d450c8e-etcd-service-ca\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.316850 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.316884 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f368631-7f8d-4004-a36c-38cb52391cb4-etcd-client\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.316912 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9437d039-7efe-4e41-810c-2cf9c324ae08-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h69cg\" (UID: \"9437d039-7efe-4e41-810c-2cf9c324ae08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.316937 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-image-import-ca\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.316973 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.317148 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-audit-policies\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.317845 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.317947 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-trusted-ca-bundle\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.317964 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1f368631-7f8d-4004-a36c-38cb52391cb4-audit-policies\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.318428 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabaed27-8848-4061-9644-ff60ca94389c-service-ca-bundle\") pod \"authentication-operator-69f744f599-7wvg2\" (UID: \"cabaed27-8848-4061-9644-ff60ca94389c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.318485 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f368631-7f8d-4004-a36c-38cb52391cb4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.319004 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.319289 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae5e2f5e-3826-45ce-a7af-a2670fcc41ab-trusted-ca\") pod \"console-operator-58897d9998-nplvk\" (UID: \"ae5e2f5e-3826-45ce-a7af-a2670fcc41ab\") " pod="openshift-console-operator/console-operator-58897d9998-nplvk" Feb 16 
13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.319494 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f368631-7f8d-4004-a36c-38cb52391cb4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.319514 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-audit\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.319951 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-72lrh\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.320058 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5e2f5e-3826-45ce-a7af-a2670fcc41ab-config\") pod \"console-operator-58897d9998-nplvk\" (UID: \"ae5e2f5e-3826-45ce-a7af-a2670fcc41ab\") " pod="openshift-console-operator/console-operator-58897d9998-nplvk" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.320226 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" 
Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.320229 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.321104 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-oauth-serving-cert\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.321234 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-node-pullsecrets\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.321510 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6a96223-8094-41a8-a311-231ef35ac6b2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mcbf\" (UID: \"a6a96223-8094-41a8-a311-231ef35ac6b2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.321523 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-service-ca\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " 
pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.321557 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-client-ca\") pod \"controller-manager-879f6c89f-72lrh\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.321642 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f368631-7f8d-4004-a36c-38cb52391cb4-audit-dir\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.322170 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-etcd-serving-ca\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.322236 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6a96223-8094-41a8-a311-231ef35ac6b2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mcbf\" (UID: \"a6a96223-8094-41a8-a311-231ef35ac6b2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.322278 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f368631-7f8d-4004-a36c-38cb52391cb4-serving-cert\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.323748 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabaed27-8848-4061-9644-ff60ca94389c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7wvg2\" (UID: \"cabaed27-8848-4061-9644-ff60ca94389c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.323918 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.324207 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.324380 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cabaed27-8848-4061-9644-ff60ca94389c-serving-cert\") pod \"authentication-operator-69f744f599-7wvg2\" (UID: \"cabaed27-8848-4061-9644-ff60ca94389c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.324706 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/18c621e3-e734-428a-9bf7-930f8d450c8e-etcd-service-ca\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.324757 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-config\") pod \"controller-manager-879f6c89f-72lrh\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.325120 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-encryption-config\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.325525 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.325717 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.326396 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: 
\"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.326649 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/97376a54-e945-445a-b5fe-b2b658705dc5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9ttfl\" (UID: \"97376a54-e945-445a-b5fe-b2b658705dc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.326747 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-console-config\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.327346 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c621e3-e734-428a-9bf7-930f8d450c8e-config\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.327624 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8f24d90-54d8-4344-8140-c9fa919b456a-console-serving-cert\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.327966 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-tnqj2\" (UID: \"c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.328149 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-v2s8n\" (UID: \"c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.328157 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/18c621e3-e734-428a-9bf7-930f8d450c8e-etcd-ca\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.328418 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.328482 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f368631-7f8d-4004-a36c-38cb52391cb4-encryption-config\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.328807 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.329177 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/18c621e3-e734-428a-9bf7-930f8d450c8e-etcd-client\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.330147 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f368631-7f8d-4004-a36c-38cb52391cb4-etcd-client\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.330401 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-image-import-ca\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.330403 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8f24d90-54d8-4344-8140-c9fa919b456a-console-oauth-config\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.330773 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/18c621e3-e734-428a-9bf7-930f8d450c8e-serving-cert\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.331830 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-etcd-client\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.332131 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5be0ecd5-70de-4fa9-abcc-685cef55d530-serving-cert\") pod \"controller-manager-879f6c89f-72lrh\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.349856 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.354069 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-v2s8n\" (UID: \"c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.363669 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.383695 4812 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.395050 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9437d039-7efe-4e41-810c-2cf9c324ae08-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h69cg\" (UID: \"9437d039-7efe-4e41-810c-2cf9c324ae08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.404242 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.423514 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.427768 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9437d039-7efe-4e41-810c-2cf9c324ae08-config\") pod \"kube-apiserver-operator-766d6c64bb-h69cg\" (UID: \"9437d039-7efe-4e41-810c-2cf9c324ae08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.483597 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.503222 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.530266 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.543688 4812 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.563426 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.583837 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.604278 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.623053 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.643475 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.662969 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.683092 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.703133 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.725484 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.743964 4812 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.763473 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.783945 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.804730 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.824492 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.844649 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.863631 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.883731 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.903356 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.923543 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.943350 4812 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.964385 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 13:34:11 crc kubenswrapper[4812]: I0216 13:34:11.983814 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.004629 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.023976 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.043282 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.063259 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.086682 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.103541 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.124673 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.141890 4812 request.go:700] Waited for 1.01810347s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0 Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.146014 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.165353 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.184369 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.203574 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.223398 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.243921 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.264571 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.283284 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.303851 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 13:34:12 crc kubenswrapper[4812]: 
I0216 13:34:12.323167 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.343882 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.363697 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.383107 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.402935 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.423771 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.443981 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.463826 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.484844 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.503515 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.523262 4812 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.544392 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.564347 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.583755 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.603870 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.622791 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.643067 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.663669 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.683964 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.703200 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.723965 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.751152 4812 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.764744 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.785218 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.803793 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.825649 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.844834 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.865153 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.884077 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.904156 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.925222 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.943486 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.963964 4812 
reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 13:34:12 crc kubenswrapper[4812]: I0216 13:34:12.983395 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.003643 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.024079 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.049352 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.078923 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc99b\" (UniqueName: \"kubernetes.io/projected/c221ee5f-91c7-4ca7-9567-55cd7bd72beb-kube-api-access-jc99b\") pod \"downloads-7954f5f757-sv88f\" (UID: \"c221ee5f-91c7-4ca7-9567-55cd7bd72beb\") " pod="openshift-console/downloads-7954f5f757-sv88f" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.098942 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzhd7\" (UniqueName: \"kubernetes.io/projected/5245eea2-0039-4127-bd35-5d4ab5204b62-kube-api-access-mzhd7\") pod \"oauth-openshift-558db77b4-4mg2p\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.122241 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct7z2\" (UniqueName: \"kubernetes.io/projected/5be0ecd5-70de-4fa9-abcc-685cef55d530-kube-api-access-ct7z2\") pod 
\"controller-manager-879f6c89f-72lrh\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.141008 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.146147 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qslkx\" (UniqueName: \"kubernetes.io/projected/97376a54-e945-445a-b5fe-b2b658705dc5-kube-api-access-qslkx\") pod \"openshift-config-operator-7777fb866f-9ttfl\" (UID: \"97376a54-e945-445a-b5fe-b2b658705dc5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.159179 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdcp5\" (UniqueName: \"kubernetes.io/projected/d8f24d90-54d8-4344-8140-c9fa919b456a-kube-api-access-bdcp5\") pod \"console-f9d7485db-tpgqc\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.161615 4812 request.go:700] Waited for 1.84328773s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.179767 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krtzh\" (UniqueName: \"kubernetes.io/projected/c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34-kube-api-access-krtzh\") pod \"cluster-image-registry-operator-dc59b4c8b-v2s8n\" (UID: \"c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n" Feb 16 13:34:13 crc kubenswrapper[4812]: 
I0216 13:34:13.180031 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.187277 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.195747 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-sv88f" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.210369 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmndv\" (UniqueName: \"kubernetes.io/projected/18c621e3-e734-428a-9bf7-930f8d450c8e-kube-api-access-nmndv\") pod \"etcd-operator-b45778765-plrxx\" (UID: \"18c621e3-e734-428a-9bf7-930f8d450c8e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.219595 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq29c\" (UniqueName: \"kubernetes.io/projected/a6a96223-8094-41a8-a311-231ef35ac6b2-kube-api-access-zq29c\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mcbf\" (UID: \"a6a96223-8094-41a8-a311-231ef35ac6b2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.241330 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n78zp\" (UniqueName: \"kubernetes.io/projected/c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e-kube-api-access-n78zp\") pod \"cluster-samples-operator-665b6dd947-tnqj2\" (UID: \"c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.259920 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-bppch\" (UniqueName: \"kubernetes.io/projected/7f4d6c63-7c73-4fae-8738-04def1b3e5e3-kube-api-access-bppch\") pod \"apiserver-76f77b778f-wc6pn\" (UID: \"7f4d6c63-7c73-4fae-8738-04def1b3e5e3\") " pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.279258 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-v2s8n\" (UID: \"c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.303257 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9xcx\" (UniqueName: \"kubernetes.io/projected/cabaed27-8848-4061-9644-ff60ca94389c-kube-api-access-t9xcx\") pod \"authentication-operator-69f744f599-7wvg2\" (UID: \"cabaed27-8848-4061-9644-ff60ca94389c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.319063 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blvbj\" (UniqueName: \"kubernetes.io/projected/1f368631-7f8d-4004-a36c-38cb52391cb4-kube-api-access-blvbj\") pod \"apiserver-7bbb656c7d-gb9bh\" (UID: \"1f368631-7f8d-4004-a36c-38cb52391cb4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.339175 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lsb5\" (UniqueName: \"kubernetes.io/projected/ae5e2f5e-3826-45ce-a7af-a2670fcc41ab-kube-api-access-2lsb5\") pod \"console-operator-58897d9998-nplvk\" (UID: \"ae5e2f5e-3826-45ce-a7af-a2670fcc41ab\") " pod="openshift-console-operator/console-operator-58897d9998-nplvk" Feb 16 13:34:13 crc 
kubenswrapper[4812]: I0216 13:34:13.350719 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.357240 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.360482 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9437d039-7efe-4e41-810c-2cf9c324ae08-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h69cg\" (UID: \"9437d039-7efe-4e41-810c-2cf9c324ae08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.365997 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nplvk" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.388142 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.407915 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4mg2p"] Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.452848 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.456916 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.457639 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479-config\") pod \"machine-api-operator-5694c8668f-4cx9t\" (UID: \"3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.457668 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dcfa0e5-1712-4411-afe5-e922c185b120-config\") pod \"route-controller-manager-6576b87f9c-5sss2\" (UID: \"1dcfa0e5-1712-4411-afe5-e922c185b120\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.457688 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8f6a70e5-ea2c-431f-b749-bab49aa63442-machine-approver-tls\") pod \"machine-approver-56656f9798-942n4\" (UID: \"8f6a70e5-ea2c-431f-b749-bab49aa63442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.457704 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.457719 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4cx9t\" (UID: \"3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.457736 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-bound-sa-token\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.457939 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vl6\" (UniqueName: \"kubernetes.io/projected/3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479-kube-api-access-g9vl6\") pod \"machine-api-operator-5694c8668f-4cx9t\" (UID: \"3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.457966 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-registry-certificates\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.457982 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-trusted-ca\") pod \"image-registry-697d97f7c8-2f89v\" (UID: 
\"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.458012 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8258\" (UniqueName: \"kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-kube-api-access-p8258\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.458918 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dcfa0e5-1712-4411-afe5-e922c185b120-serving-cert\") pod \"route-controller-manager-6576b87f9c-5sss2\" (UID: \"1dcfa0e5-1712-4411-afe5-e922c185b120\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.459020 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnklk\" (UniqueName: \"kubernetes.io/projected/9d7cab04-b239-4a4c-b3da-ba280200cd57-kube-api-access-fnklk\") pod \"openshift-apiserver-operator-796bbdcf4f-8ngmk\" (UID: \"9d7cab04-b239-4a4c-b3da-ba280200cd57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.459067 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479-images\") pod \"machine-api-operator-5694c8668f-4cx9t\" (UID: \"3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.459089 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-registry-tls\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.459119 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dcfa0e5-1712-4411-afe5-e922c185b120-client-ca\") pod \"route-controller-manager-6576b87f9c-5sss2\" (UID: \"1dcfa0e5-1712-4411-afe5-e922c185b120\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.459145 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f6a70e5-ea2c-431f-b749-bab49aa63442-auth-proxy-config\") pod \"machine-approver-56656f9798-942n4\" (UID: \"8f6a70e5-ea2c-431f-b749-bab49aa63442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.459171 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d7cab04-b239-4a4c-b3da-ba280200cd57-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8ngmk\" (UID: \"9d7cab04-b239-4a4c-b3da-ba280200cd57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.459198 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz2f4\" (UniqueName: \"kubernetes.io/projected/8f6a70e5-ea2c-431f-b749-bab49aa63442-kube-api-access-sz2f4\") pod 
\"machine-approver-56656f9798-942n4\" (UID: \"8f6a70e5-ea2c-431f-b749-bab49aa63442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.459286 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.459318 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d7cab04-b239-4a4c-b3da-ba280200cd57-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8ngmk\" (UID: \"9d7cab04-b239-4a4c-b3da-ba280200cd57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.459339 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84w29\" (UniqueName: \"kubernetes.io/projected/1dcfa0e5-1712-4411-afe5-e922c185b120-kube-api-access-84w29\") pod \"route-controller-manager-6576b87f9c-5sss2\" (UID: \"1dcfa0e5-1712-4411-afe5-e922c185b120\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.459354 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f6a70e5-ea2c-431f-b749-bab49aa63442-config\") pod \"machine-approver-56656f9798-942n4\" (UID: \"8f6a70e5-ea2c-431f-b749-bab49aa63442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" Feb 16 13:34:13 crc 
kubenswrapper[4812]: I0216 13:34:13.459371 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: E0216 13:34:13.461049 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:13.960961986 +0000 UTC m=+143.025292687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.464996 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.510701 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.523920 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.529771 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.560987 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561409 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84w29\" (UniqueName: \"kubernetes.io/projected/1dcfa0e5-1712-4411-afe5-e922c185b120-kube-api-access-84w29\") pod \"route-controller-manager-6576b87f9c-5sss2\" (UID: \"1dcfa0e5-1712-4411-afe5-e922c185b120\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561432 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f6a70e5-ea2c-431f-b749-bab49aa63442-config\") pod \"machine-approver-56656f9798-942n4\" (UID: \"8f6a70e5-ea2c-431f-b749-bab49aa63442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561471 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/939c437d-1347-489e-bb4a-1b783a62d707-node-bootstrap-token\") pod \"machine-config-server-qfjbm\" (UID: \"939c437d-1347-489e-bb4a-1b783a62d707\") " pod="openshift-machine-config-operator/machine-config-server-qfjbm" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561488 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqqc7\" 
(UniqueName: \"kubernetes.io/projected/057b7c44-3b11-4a03-8325-0b3819b55f6f-kube-api-access-dqqc7\") pod \"packageserver-d55dfcdfc-6n6fc\" (UID: \"057b7c44-3b11-4a03-8325-0b3819b55f6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561504 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ece23bb-e939-4912-99fc-ea54a7c7336e-metrics-tls\") pod \"ingress-operator-5b745b69d9-ll6fj\" (UID: \"2ece23bb-e939-4912-99fc-ea54a7c7336e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561522 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561539 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/939c437d-1347-489e-bb4a-1b783a62d707-certs\") pod \"machine-config-server-qfjbm\" (UID: \"939c437d-1347-489e-bb4a-1b783a62d707\") " pod="openshift-machine-config-operator/machine-config-server-qfjbm" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561555 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/057b7c44-3b11-4a03-8325-0b3819b55f6f-apiservice-cert\") pod \"packageserver-d55dfcdfc-6n6fc\" (UID: \"057b7c44-3b11-4a03-8325-0b3819b55f6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" Feb 16 13:34:13 crc 
kubenswrapper[4812]: I0216 13:34:13.561574 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg95r\" (UniqueName: \"kubernetes.io/projected/88330cc0-3dd3-4ff7-8661-6a79d3e1667a-kube-api-access-vg95r\") pod \"machine-config-operator-74547568cd-gjtcl\" (UID: \"88330cc0-3dd3-4ff7-8661-6a79d3e1667a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561590 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d34aa26a-9b3b-463d-bea6-be2d12b5854c-mountpoint-dir\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561640 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp678\" (UniqueName: \"kubernetes.io/projected/9c80fa26-a106-41fa-b66d-53954e1b233b-kube-api-access-kp678\") pod \"olm-operator-6b444d44fb-g8zmc\" (UID: \"9c80fa26-a106-41fa-b66d-53954e1b233b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561670 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5bd1b4d8-80f4-4044-891b-a5e3450a0f48-stats-auth\") pod \"router-default-5444994796-b7psd\" (UID: \"5bd1b4d8-80f4-4044-891b-a5e3450a0f48\") " pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561683 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c80fa26-a106-41fa-b66d-53954e1b233b-srv-cert\") pod 
\"olm-operator-6b444d44fb-g8zmc\" (UID: \"9c80fa26-a106-41fa-b66d-53954e1b233b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561699 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3f70bd4-a15e-44dd-a610-ee085e108403-config-volume\") pod \"dns-default-pxwbj\" (UID: \"a3f70bd4-a15e-44dd-a610-ee085e108403\") " pod="openshift-dns/dns-default-pxwbj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561722 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb67b\" (UniqueName: \"kubernetes.io/projected/08ecaa84-5c71-4570-8f7f-d753d2eeb9ab-kube-api-access-wb67b\") pod \"kube-storage-version-migrator-operator-b67b599dd-j84fb\" (UID: \"08ecaa84-5c71-4570-8f7f-d753d2eeb9ab\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561739 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d34aa26a-9b3b-463d-bea6-be2d12b5854c-plugins-dir\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561763 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmc92\" (UniqueName: \"kubernetes.io/projected/2ece23bb-e939-4912-99fc-ea54a7c7336e-kube-api-access-cmc92\") pod \"ingress-operator-5b745b69d9-ll6fj\" (UID: \"2ece23bb-e939-4912-99fc-ea54a7c7336e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561792 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8f6a70e5-ea2c-431f-b749-bab49aa63442-machine-approver-tls\") pod \"machine-approver-56656f9798-942n4\" (UID: \"8f6a70e5-ea2c-431f-b749-bab49aa63442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561807 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8dba5fdd-d62c-41c5-9550-d98118b3b1a1-metrics-tls\") pod \"dns-operator-744455d44c-w525k\" (UID: \"8dba5fdd-d62c-41c5-9550-d98118b3b1a1\") " pod="openshift-dns-operator/dns-operator-744455d44c-w525k" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561822 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/110cad9b-2348-4b66-a432-4461f3bd77c6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wknkb\" (UID: \"110cad9b-2348-4b66-a432-4461f3bd77c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561850 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561867 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4cx9t\" 
(UID: \"3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561909 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf9sx\" (UniqueName: \"kubernetes.io/projected/939c437d-1347-489e-bb4a-1b783a62d707-kube-api-access-tf9sx\") pod \"machine-config-server-qfjbm\" (UID: \"939c437d-1347-489e-bb4a-1b783a62d707\") " pod="openshift-machine-config-operator/machine-config-server-qfjbm" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561930 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3c1e652-a321-42b1-b658-624e89a01eb3-cert\") pod \"ingress-canary-6c7g6\" (UID: \"a3c1e652-a321-42b1-b658-624e89a01eb3\") " pod="openshift-ingress-canary/ingress-canary-6c7g6" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561953 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7be72ca7-b8da-4034-b6e0-16218c2e793e-serving-cert\") pod \"service-ca-operator-777779d784-jrx24\" (UID: \"7be72ca7-b8da-4034-b6e0-16218c2e793e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jrx24" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.561974 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/809592c1-c9ad-49f0-90a6-cea3bbebf136-signing-cabundle\") pod \"service-ca-9c57cc56f-h57x4\" (UID: \"809592c1-c9ad-49f0-90a6-cea3bbebf136\") " pod="openshift-service-ca/service-ca-9c57cc56f-h57x4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.562002 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9vl6\" (UniqueName: 
\"kubernetes.io/projected/3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479-kube-api-access-g9vl6\") pod \"machine-api-operator-5694c8668f-4cx9t\" (UID: \"3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.562016 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5bd1b4d8-80f4-4044-891b-a5e3450a0f48-metrics-certs\") pod \"router-default-5444994796-b7psd\" (UID: \"5bd1b4d8-80f4-4044-891b-a5e3450a0f48\") " pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.562032 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ecaa84-5c71-4570-8f7f-d753d2eeb9ab-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-j84fb\" (UID: \"08ecaa84-5c71-4570-8f7f-d753d2eeb9ab\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.562058 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdbbn\" (UniqueName: \"kubernetes.io/projected/d34aa26a-9b3b-463d-bea6-be2d12b5854c-kube-api-access-mdbbn\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.562073 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ece23bb-e939-4912-99fc-ea54a7c7336e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ll6fj\" (UID: \"2ece23bb-e939-4912-99fc-ea54a7c7336e\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.562105 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3f70bd4-a15e-44dd-a610-ee085e108403-metrics-tls\") pod \"dns-default-pxwbj\" (UID: \"a3f70bd4-a15e-44dd-a610-ee085e108403\") " pod="openshift-dns/dns-default-pxwbj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.562136 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/110cad9b-2348-4b66-a432-4461f3bd77c6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wknkb\" (UID: \"110cad9b-2348-4b66-a432-4461f3bd77c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.562163 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-registry-tls\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.562178 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be72ca7-b8da-4034-b6e0-16218c2e793e-config\") pod \"service-ca-operator-777779d784-jrx24\" (UID: \"7be72ca7-b8da-4034-b6e0-16218c2e793e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jrx24" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.562195 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/8f6a70e5-ea2c-431f-b749-bab49aa63442-auth-proxy-config\") pod \"machine-approver-56656f9798-942n4\" (UID: \"8f6a70e5-ea2c-431f-b749-bab49aa63442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.562211 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/057b7c44-3b11-4a03-8325-0b3819b55f6f-webhook-cert\") pod \"packageserver-d55dfcdfc-6n6fc\" (UID: \"057b7c44-3b11-4a03-8325-0b3819b55f6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" Feb 16 13:34:13 crc kubenswrapper[4812]: E0216 13:34:13.562629 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:14.062598942 +0000 UTC m=+143.126929643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.563393 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt5cx\" (UniqueName: \"kubernetes.io/projected/b26542fa-2c38-47d7-984b-e51679e600c4-kube-api-access-gt5cx\") pod \"catalog-operator-68c6474976-2hqkd\" (UID: \"b26542fa-2c38-47d7-984b-e51679e600c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.563423 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d7cab04-b239-4a4c-b3da-ba280200cd57-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8ngmk\" (UID: \"9d7cab04-b239-4a4c-b3da-ba280200cd57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.563304 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f6a70e5-ea2c-431f-b749-bab49aa63442-config\") pod \"machine-approver-56656f9798-942n4\" (UID: \"8f6a70e5-ea2c-431f-b749-bab49aa63442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.563462 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw89n\" (UniqueName: 
\"kubernetes.io/projected/fca937fd-eef1-4f91-b825-18d5429526a9-kube-api-access-cw89n\") pod \"collect-profiles-29520810-vq6f4\" (UID: \"fca937fd-eef1-4f91-b825-18d5429526a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.563546 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz2f4\" (UniqueName: \"kubernetes.io/projected/8f6a70e5-ea2c-431f-b749-bab49aa63442-kube-api-access-sz2f4\") pod \"machine-approver-56656f9798-942n4\" (UID: \"8f6a70e5-ea2c-431f-b749-bab49aa63442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.563651 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d34aa26a-9b3b-463d-bea6-be2d12b5854c-csi-data-dir\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.563707 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d7cab04-b239-4a4c-b3da-ba280200cd57-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8ngmk\" (UID: \"9d7cab04-b239-4a4c-b3da-ba280200cd57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.563840 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgjtd\" (UniqueName: \"kubernetes.io/projected/7be72ca7-b8da-4034-b6e0-16218c2e793e-kube-api-access-bgjtd\") pod \"service-ca-operator-777779d784-jrx24\" (UID: \"7be72ca7-b8da-4034-b6e0-16218c2e793e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jrx24" Feb 16 
13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.563883 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/84482702-f4be-41ce-98c6-eb5161d23ba0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b5crl\" (UID: \"84482702-f4be-41ce-98c6-eb5161d23ba0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.563936 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/110cad9b-2348-4b66-a432-4461f3bd77c6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wknkb\" (UID: \"110cad9b-2348-4b66-a432-4461f3bd77c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.563976 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg44q\" (UniqueName: \"kubernetes.io/projected/a3f70bd4-a15e-44dd-a610-ee085e108403-kube-api-access-qg44q\") pod \"dns-default-pxwbj\" (UID: \"a3f70bd4-a15e-44dd-a610-ee085e108403\") " pod="openshift-dns/dns-default-pxwbj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.563998 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c03df561-6085-44d5-a33c-3c01a749858e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-v7f6l\" (UID: \"c03df561-6085-44d5-a33c-3c01a749858e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.564021 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/db15826a-b0d8-4fb5-9a69-35ae6888b029-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tkjwb\" (UID: \"db15826a-b0d8-4fb5-9a69-35ae6888b029\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.564046 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/13167121-9190-4ef3-b635-d528457b4c53-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nwjmg\" (UID: \"13167121-9190-4ef3-b635-d528457b4c53\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nwjmg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.564160 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khs8b\" (UniqueName: \"kubernetes.io/projected/13167121-9190-4ef3-b635-d528457b4c53-kube-api-access-khs8b\") pod \"multus-admission-controller-857f4d67dd-nwjmg\" (UID: \"13167121-9190-4ef3-b635-d528457b4c53\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nwjmg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.564185 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9876\" (UniqueName: \"kubernetes.io/projected/8dba5fdd-d62c-41c5-9550-d98118b3b1a1-kube-api-access-j9876\") pod \"dns-operator-744455d44c-w525k\" (UID: \"8dba5fdd-d62c-41c5-9550-d98118b3b1a1\") " pod="openshift-dns-operator/dns-operator-744455d44c-w525k" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.564227 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vb8x\" (UniqueName: \"kubernetes.io/projected/84482702-f4be-41ce-98c6-eb5161d23ba0-kube-api-access-4vb8x\") pod \"package-server-manager-789f6589d5-b5crl\" (UID: 
\"84482702-f4be-41ce-98c6-eb5161d23ba0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.564370 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c03df561-6085-44d5-a33c-3c01a749858e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-v7f6l\" (UID: \"c03df561-6085-44d5-a33c-3c01a749858e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.564422 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/89281b9f-7c51-470c-aa86-bdfd398f2a2a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-klckm\" (UID: \"89281b9f-7c51-470c-aa86-bdfd398f2a2a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klckm" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.564468 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fca937fd-eef1-4f91-b825-18d5429526a9-config-volume\") pod \"collect-profiles-29520810-vq6f4\" (UID: \"fca937fd-eef1-4f91-b825-18d5429526a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.564546 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479-config\") pod \"machine-api-operator-5694c8668f-4cx9t\" (UID: \"3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" Feb 16 13:34:13 crc 
kubenswrapper[4812]: I0216 13:34:13.564570 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/88330cc0-3dd3-4ff7-8661-6a79d3e1667a-images\") pod \"machine-config-operator-74547568cd-gjtcl\" (UID: \"88330cc0-3dd3-4ff7-8661-6a79d3e1667a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.564619 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dcfa0e5-1712-4411-afe5-e922c185b120-config\") pod \"route-controller-manager-6576b87f9c-5sss2\" (UID: \"1dcfa0e5-1712-4411-afe5-e922c185b120\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.564639 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ece23bb-e939-4912-99fc-ea54a7c7336e-trusted-ca\") pod \"ingress-operator-5b745b69d9-ll6fj\" (UID: \"2ece23bb-e939-4912-99fc-ea54a7c7336e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.565117 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.566352 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4cx9t\" (UID: 
\"3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.567248 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d7cab04-b239-4a4c-b3da-ba280200cd57-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8ngmk\" (UID: \"9d7cab04-b239-4a4c-b3da-ba280200cd57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.567382 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b26542fa-2c38-47d7-984b-e51679e600c4-srv-cert\") pod \"catalog-operator-68c6474976-2hqkd\" (UID: \"b26542fa-2c38-47d7-984b-e51679e600c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.567573 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c03df561-6085-44d5-a33c-3c01a749858e-config\") pod \"kube-controller-manager-operator-78b949d7b-v7f6l\" (UID: \"c03df561-6085-44d5-a33c-3c01a749858e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.567638 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-bound-sa-token\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.567688 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db15826a-b0d8-4fb5-9a69-35ae6888b029-proxy-tls\") pod \"machine-config-controller-84d6567774-tkjwb\" (UID: \"db15826a-b0d8-4fb5-9a69-35ae6888b029\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.567980 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c80fa26-a106-41fa-b66d-53954e1b233b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g8zmc\" (UID: \"9c80fa26-a106-41fa-b66d-53954e1b233b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.568086 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f6a70e5-ea2c-431f-b749-bab49aa63442-auth-proxy-config\") pod \"machine-approver-56656f9798-942n4\" (UID: \"8f6a70e5-ea2c-431f-b749-bab49aa63442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.568271 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-registry-certificates\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.568652 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-trusted-ca\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 
crc kubenswrapper[4812]: I0216 13:34:13.568720 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88330cc0-3dd3-4ff7-8661-6a79d3e1667a-proxy-tls\") pod \"machine-config-operator-74547568cd-gjtcl\" (UID: \"88330cc0-3dd3-4ff7-8661-6a79d3e1667a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.568828 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d34aa26a-9b3b-463d-bea6-be2d12b5854c-socket-dir\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.568900 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88330cc0-3dd3-4ff7-8661-6a79d3e1667a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gjtcl\" (UID: \"88330cc0-3dd3-4ff7-8661-6a79d3e1667a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.568957 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr5km\" (UniqueName: \"kubernetes.io/projected/89281b9f-7c51-470c-aa86-bdfd398f2a2a-kube-api-access-hr5km\") pod \"control-plane-machine-set-operator-78cbb6b69f-klckm\" (UID: \"89281b9f-7c51-470c-aa86-bdfd398f2a2a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klckm" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.569188 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479-config\") pod \"machine-api-operator-5694c8668f-4cx9t\" (UID: \"3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.569646 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-registry-tls\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.569778 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d34aa26a-9b3b-463d-bea6-be2d12b5854c-registration-dir\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.569814 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/809592c1-c9ad-49f0-90a6-cea3bbebf136-signing-key\") pod \"service-ca-9c57cc56f-h57x4\" (UID: \"809592c1-c9ad-49f0-90a6-cea3bbebf136\") " pod="openshift-service-ca/service-ca-9c57cc56f-h57x4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.569935 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8258\" (UniqueName: \"kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-kube-api-access-p8258\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.570078 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dcfa0e5-1712-4411-afe5-e922c185b120-serving-cert\") pod \"route-controller-manager-6576b87f9c-5sss2\" (UID: \"1dcfa0e5-1712-4411-afe5-e922c185b120\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.570218 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ecaa84-5c71-4570-8f7f-d753d2eeb9ab-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-j84fb\" (UID: \"08ecaa84-5c71-4570-8f7f-d753d2eeb9ab\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.570244 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fca937fd-eef1-4f91-b825-18d5429526a9-secret-volume\") pod \"collect-profiles-29520810-vq6f4\" (UID: \"fca937fd-eef1-4f91-b825-18d5429526a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.570276 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4l74\" (UniqueName: \"kubernetes.io/projected/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-kube-api-access-b4l74\") pod \"marketplace-operator-79b997595-kc7dg\" (UID: \"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231\") " pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.570378 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/057b7c44-3b11-4a03-8325-0b3819b55f6f-tmpfs\") pod 
\"packageserver-d55dfcdfc-6n6fc\" (UID: \"057b7c44-3b11-4a03-8325-0b3819b55f6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.571055 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479-images\") pod \"machine-api-operator-5694c8668f-4cx9t\" (UID: \"3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.571092 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnklk\" (UniqueName: \"kubernetes.io/projected/9d7cab04-b239-4a4c-b3da-ba280200cd57-kube-api-access-fnklk\") pod \"openshift-apiserver-operator-796bbdcf4f-8ngmk\" (UID: \"9d7cab04-b239-4a4c-b3da-ba280200cd57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.571117 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5bd1b4d8-80f4-4044-891b-a5e3450a0f48-default-certificate\") pod \"router-default-5444994796-b7psd\" (UID: \"5bd1b4d8-80f4-4044-891b-a5e3450a0f48\") " pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.571153 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft4vf\" (UniqueName: \"kubernetes.io/projected/9cf0c1ed-445f-4f9c-a8c3-c903d559de4d-kube-api-access-ft4vf\") pod \"migrator-59844c95c7-46895\" (UID: \"9cf0c1ed-445f-4f9c-a8c3-c903d559de4d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46895" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.571176 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bd1b4d8-80f4-4044-891b-a5e3450a0f48-service-ca-bundle\") pod \"router-default-5444994796-b7psd\" (UID: \"5bd1b4d8-80f4-4044-891b-a5e3450a0f48\") " pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.571191 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b26542fa-2c38-47d7-984b-e51679e600c4-profile-collector-cert\") pod \"catalog-operator-68c6474976-2hqkd\" (UID: \"b26542fa-2c38-47d7-984b-e51679e600c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.571259 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dcfa0e5-1712-4411-afe5-e922c185b120-client-ca\") pod \"route-controller-manager-6576b87f9c-5sss2\" (UID: \"1dcfa0e5-1712-4411-afe5-e922c185b120\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.571302 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kc7dg\" (UID: \"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231\") " pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.571358 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vswk7\" (UniqueName: \"kubernetes.io/projected/db15826a-b0d8-4fb5-9a69-35ae6888b029-kube-api-access-vswk7\") pod 
\"machine-config-controller-84d6567774-tkjwb\" (UID: \"db15826a-b0d8-4fb5-9a69-35ae6888b029\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.571362 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-registry-certificates\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.571500 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.571772 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479-images\") pod \"machine-api-operator-5694c8668f-4cx9t\" (UID: \"3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.572172 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtcl6\" (UniqueName: \"kubernetes.io/projected/809592c1-c9ad-49f0-90a6-cea3bbebf136-kube-api-access-vtcl6\") pod \"service-ca-9c57cc56f-h57x4\" (UID: \"809592c1-c9ad-49f0-90a6-cea3bbebf136\") " pod="openshift-service-ca/service-ca-9c57cc56f-h57x4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.572222 4812 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dcfa0e5-1712-4411-afe5-e922c185b120-config\") pod \"route-controller-manager-6576b87f9c-5sss2\" (UID: \"1dcfa0e5-1712-4411-afe5-e922c185b120\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.572395 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7stz\" (UniqueName: \"kubernetes.io/projected/a3c1e652-a321-42b1-b658-624e89a01eb3-kube-api-access-x7stz\") pod \"ingress-canary-6c7g6\" (UID: \"a3c1e652-a321-42b1-b658-624e89a01eb3\") " pod="openshift-ingress-canary/ingress-canary-6c7g6" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.572538 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-trusted-ca\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: E0216 13:34:13.572597 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:14.072583072 +0000 UTC m=+143.136913773 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.572668 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz5pd\" (UniqueName: \"kubernetes.io/projected/5bd1b4d8-80f4-4044-891b-a5e3450a0f48-kube-api-access-dz5pd\") pod \"router-default-5444994796-b7psd\" (UID: \"5bd1b4d8-80f4-4044-891b-a5e3450a0f48\") " pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.572691 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kc7dg\" (UID: \"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231\") " pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.572810 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.573463 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1dcfa0e5-1712-4411-afe5-e922c185b120-client-ca\") pod \"route-controller-manager-6576b87f9c-5sss2\" (UID: \"1dcfa0e5-1712-4411-afe5-e922c185b120\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.575622 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8f6a70e5-ea2c-431f-b749-bab49aa63442-machine-approver-tls\") pod \"machine-approver-56656f9798-942n4\" (UID: \"8f6a70e5-ea2c-431f-b749-bab49aa63442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.576947 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d7cab04-b239-4a4c-b3da-ba280200cd57-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8ngmk\" (UID: \"9d7cab04-b239-4a4c-b3da-ba280200cd57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.578001 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dcfa0e5-1712-4411-afe5-e922c185b120-serving-cert\") pod \"route-controller-manager-6576b87f9c-5sss2\" (UID: \"1dcfa0e5-1712-4411-afe5-e922c185b120\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.613285 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" event={"ID":"5245eea2-0039-4127-bd35-5d4ab5204b62","Type":"ContainerStarted","Data":"c3260a47512d7c85f003ed16b14f1f86afe94056c98c9c3ea0a8db4ffa5b52a5"} Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.613425 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-g9vl6\" (UniqueName: \"kubernetes.io/projected/3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479-kube-api-access-g9vl6\") pod \"machine-api-operator-5694c8668f-4cx9t\" (UID: \"3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.614424 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl"] Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.635397 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84w29\" (UniqueName: \"kubernetes.io/projected/1dcfa0e5-1712-4411-afe5-e922c185b120-kube-api-access-84w29\") pod \"route-controller-manager-6576b87f9c-5sss2\" (UID: \"1dcfa0e5-1712-4411-afe5-e922c185b120\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.642626 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz2f4\" (UniqueName: \"kubernetes.io/projected/8f6a70e5-ea2c-431f-b749-bab49aa63442-kube-api-access-sz2f4\") pod \"machine-approver-56656f9798-942n4\" (UID: \"8f6a70e5-ea2c-431f-b749-bab49aa63442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.652871 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-sv88f"] Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.654839 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-72lrh"] Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.670205 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-bound-sa-token\") pod 
\"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.674465 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.674635 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp678\" (UniqueName: \"kubernetes.io/projected/9c80fa26-a106-41fa-b66d-53954e1b233b-kube-api-access-kp678\") pod \"olm-operator-6b444d44fb-g8zmc\" (UID: \"9c80fa26-a106-41fa-b66d-53954e1b233b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.674675 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5bd1b4d8-80f4-4044-891b-a5e3450a0f48-stats-auth\") pod \"router-default-5444994796-b7psd\" (UID: \"5bd1b4d8-80f4-4044-891b-a5e3450a0f48\") " pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.674691 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c80fa26-a106-41fa-b66d-53954e1b233b-srv-cert\") pod \"olm-operator-6b444d44fb-g8zmc\" (UID: \"9c80fa26-a106-41fa-b66d-53954e1b233b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.674709 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/a3f70bd4-a15e-44dd-a610-ee085e108403-config-volume\") pod \"dns-default-pxwbj\" (UID: \"a3f70bd4-a15e-44dd-a610-ee085e108403\") " pod="openshift-dns/dns-default-pxwbj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.674728 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb67b\" (UniqueName: \"kubernetes.io/projected/08ecaa84-5c71-4570-8f7f-d753d2eeb9ab-kube-api-access-wb67b\") pod \"kube-storage-version-migrator-operator-b67b599dd-j84fb\" (UID: \"08ecaa84-5c71-4570-8f7f-d753d2eeb9ab\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.674744 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d34aa26a-9b3b-463d-bea6-be2d12b5854c-plugins-dir\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.674760 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmc92\" (UniqueName: \"kubernetes.io/projected/2ece23bb-e939-4912-99fc-ea54a7c7336e-kube-api-access-cmc92\") pod \"ingress-operator-5b745b69d9-ll6fj\" (UID: \"2ece23bb-e939-4912-99fc-ea54a7c7336e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.674803 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8dba5fdd-d62c-41c5-9550-d98118b3b1a1-metrics-tls\") pod \"dns-operator-744455d44c-w525k\" (UID: \"8dba5fdd-d62c-41c5-9550-d98118b3b1a1\") " pod="openshift-dns-operator/dns-operator-744455d44c-w525k" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.674837 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/110cad9b-2348-4b66-a432-4461f3bd77c6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wknkb\" (UID: \"110cad9b-2348-4b66-a432-4461f3bd77c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.674885 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf9sx\" (UniqueName: \"kubernetes.io/projected/939c437d-1347-489e-bb4a-1b783a62d707-kube-api-access-tf9sx\") pod \"machine-config-server-qfjbm\" (UID: \"939c437d-1347-489e-bb4a-1b783a62d707\") " pod="openshift-machine-config-operator/machine-config-server-qfjbm" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.674909 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3c1e652-a321-42b1-b658-624e89a01eb3-cert\") pod \"ingress-canary-6c7g6\" (UID: \"a3c1e652-a321-42b1-b658-624e89a01eb3\") " pod="openshift-ingress-canary/ingress-canary-6c7g6" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.674931 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7be72ca7-b8da-4034-b6e0-16218c2e793e-serving-cert\") pod \"service-ca-operator-777779d784-jrx24\" (UID: \"7be72ca7-b8da-4034-b6e0-16218c2e793e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jrx24" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.674952 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5bd1b4d8-80f4-4044-891b-a5e3450a0f48-metrics-certs\") pod \"router-default-5444994796-b7psd\" (UID: \"5bd1b4d8-80f4-4044-891b-a5e3450a0f48\") " pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:13 crc 
kubenswrapper[4812]: I0216 13:34:13.674971 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/809592c1-c9ad-49f0-90a6-cea3bbebf136-signing-cabundle\") pod \"service-ca-9c57cc56f-h57x4\" (UID: \"809592c1-c9ad-49f0-90a6-cea3bbebf136\") " pod="openshift-service-ca/service-ca-9c57cc56f-h57x4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.674992 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ecaa84-5c71-4570-8f7f-d753d2eeb9ab-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-j84fb\" (UID: \"08ecaa84-5c71-4570-8f7f-d753d2eeb9ab\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675014 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdbbn\" (UniqueName: \"kubernetes.io/projected/d34aa26a-9b3b-463d-bea6-be2d12b5854c-kube-api-access-mdbbn\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675033 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ece23bb-e939-4912-99fc-ea54a7c7336e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ll6fj\" (UID: \"2ece23bb-e939-4912-99fc-ea54a7c7336e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675047 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3f70bd4-a15e-44dd-a610-ee085e108403-metrics-tls\") pod \"dns-default-pxwbj\" (UID: \"a3f70bd4-a15e-44dd-a610-ee085e108403\") " 
pod="openshift-dns/dns-default-pxwbj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675078 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/110cad9b-2348-4b66-a432-4461f3bd77c6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wknkb\" (UID: \"110cad9b-2348-4b66-a432-4461f3bd77c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675094 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be72ca7-b8da-4034-b6e0-16218c2e793e-config\") pod \"service-ca-operator-777779d784-jrx24\" (UID: \"7be72ca7-b8da-4034-b6e0-16218c2e793e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jrx24" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675110 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/057b7c44-3b11-4a03-8325-0b3819b55f6f-webhook-cert\") pod \"packageserver-d55dfcdfc-6n6fc\" (UID: \"057b7c44-3b11-4a03-8325-0b3819b55f6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675129 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw89n\" (UniqueName: \"kubernetes.io/projected/fca937fd-eef1-4f91-b825-18d5429526a9-kube-api-access-cw89n\") pod \"collect-profiles-29520810-vq6f4\" (UID: \"fca937fd-eef1-4f91-b825-18d5429526a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675146 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt5cx\" (UniqueName: 
\"kubernetes.io/projected/b26542fa-2c38-47d7-984b-e51679e600c4-kube-api-access-gt5cx\") pod \"catalog-operator-68c6474976-2hqkd\" (UID: \"b26542fa-2c38-47d7-984b-e51679e600c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675171 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d34aa26a-9b3b-463d-bea6-be2d12b5854c-csi-data-dir\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675188 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgjtd\" (UniqueName: \"kubernetes.io/projected/7be72ca7-b8da-4034-b6e0-16218c2e793e-kube-api-access-bgjtd\") pod \"service-ca-operator-777779d784-jrx24\" (UID: \"7be72ca7-b8da-4034-b6e0-16218c2e793e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jrx24" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675206 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/84482702-f4be-41ce-98c6-eb5161d23ba0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b5crl\" (UID: \"84482702-f4be-41ce-98c6-eb5161d23ba0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675299 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/110cad9b-2348-4b66-a432-4461f3bd77c6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wknkb\" (UID: \"110cad9b-2348-4b66-a432-4461f3bd77c6\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675319 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg44q\" (UniqueName: \"kubernetes.io/projected/a3f70bd4-a15e-44dd-a610-ee085e108403-kube-api-access-qg44q\") pod \"dns-default-pxwbj\" (UID: \"a3f70bd4-a15e-44dd-a610-ee085e108403\") " pod="openshift-dns/dns-default-pxwbj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675338 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c03df561-6085-44d5-a33c-3c01a749858e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-v7f6l\" (UID: \"c03df561-6085-44d5-a33c-3c01a749858e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675355 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/db15826a-b0d8-4fb5-9a69-35ae6888b029-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tkjwb\" (UID: \"db15826a-b0d8-4fb5-9a69-35ae6888b029\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675371 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/13167121-9190-4ef3-b635-d528457b4c53-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nwjmg\" (UID: \"13167121-9190-4ef3-b635-d528457b4c53\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nwjmg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675389 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khs8b\" (UniqueName: 
\"kubernetes.io/projected/13167121-9190-4ef3-b635-d528457b4c53-kube-api-access-khs8b\") pod \"multus-admission-controller-857f4d67dd-nwjmg\" (UID: \"13167121-9190-4ef3-b635-d528457b4c53\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nwjmg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675407 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9876\" (UniqueName: \"kubernetes.io/projected/8dba5fdd-d62c-41c5-9550-d98118b3b1a1-kube-api-access-j9876\") pod \"dns-operator-744455d44c-w525k\" (UID: \"8dba5fdd-d62c-41c5-9550-d98118b3b1a1\") " pod="openshift-dns-operator/dns-operator-744455d44c-w525k" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675424 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c03df561-6085-44d5-a33c-3c01a749858e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-v7f6l\" (UID: \"c03df561-6085-44d5-a33c-3c01a749858e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675440 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vb8x\" (UniqueName: \"kubernetes.io/projected/84482702-f4be-41ce-98c6-eb5161d23ba0-kube-api-access-4vb8x\") pod \"package-server-manager-789f6589d5-b5crl\" (UID: \"84482702-f4be-41ce-98c6-eb5161d23ba0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675475 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/89281b9f-7c51-470c-aa86-bdfd398f2a2a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-klckm\" (UID: 
\"89281b9f-7c51-470c-aa86-bdfd398f2a2a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klckm" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675494 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fca937fd-eef1-4f91-b825-18d5429526a9-config-volume\") pod \"collect-profiles-29520810-vq6f4\" (UID: \"fca937fd-eef1-4f91-b825-18d5429526a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675515 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/88330cc0-3dd3-4ff7-8661-6a79d3e1667a-images\") pod \"machine-config-operator-74547568cd-gjtcl\" (UID: \"88330cc0-3dd3-4ff7-8661-6a79d3e1667a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675532 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ece23bb-e939-4912-99fc-ea54a7c7336e-trusted-ca\") pod \"ingress-operator-5b745b69d9-ll6fj\" (UID: \"2ece23bb-e939-4912-99fc-ea54a7c7336e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675547 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b26542fa-2c38-47d7-984b-e51679e600c4-srv-cert\") pod \"catalog-operator-68c6474976-2hqkd\" (UID: \"b26542fa-2c38-47d7-984b-e51679e600c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675563 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c03df561-6085-44d5-a33c-3c01a749858e-config\") pod \"kube-controller-manager-operator-78b949d7b-v7f6l\" (UID: \"c03df561-6085-44d5-a33c-3c01a749858e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675585 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db15826a-b0d8-4fb5-9a69-35ae6888b029-proxy-tls\") pod \"machine-config-controller-84d6567774-tkjwb\" (UID: \"db15826a-b0d8-4fb5-9a69-35ae6888b029\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675610 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c80fa26-a106-41fa-b66d-53954e1b233b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g8zmc\" (UID: \"9c80fa26-a106-41fa-b66d-53954e1b233b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675627 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88330cc0-3dd3-4ff7-8661-6a79d3e1667a-proxy-tls\") pod \"machine-config-operator-74547568cd-gjtcl\" (UID: \"88330cc0-3dd3-4ff7-8661-6a79d3e1667a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675641 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d34aa26a-9b3b-463d-bea6-be2d12b5854c-socket-dir\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 
13:34:13.675657 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88330cc0-3dd3-4ff7-8661-6a79d3e1667a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gjtcl\" (UID: \"88330cc0-3dd3-4ff7-8661-6a79d3e1667a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675675 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr5km\" (UniqueName: \"kubernetes.io/projected/89281b9f-7c51-470c-aa86-bdfd398f2a2a-kube-api-access-hr5km\") pod \"control-plane-machine-set-operator-78cbb6b69f-klckm\" (UID: \"89281b9f-7c51-470c-aa86-bdfd398f2a2a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klckm" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675695 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/809592c1-c9ad-49f0-90a6-cea3bbebf136-signing-key\") pod \"service-ca-9c57cc56f-h57x4\" (UID: \"809592c1-c9ad-49f0-90a6-cea3bbebf136\") " pod="openshift-service-ca/service-ca-9c57cc56f-h57x4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675727 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ecaa84-5c71-4570-8f7f-d753d2eeb9ab-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-j84fb\" (UID: \"08ecaa84-5c71-4570-8f7f-d753d2eeb9ab\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675750 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d34aa26a-9b3b-463d-bea6-be2d12b5854c-registration-dir\") pod 
\"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675772 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fca937fd-eef1-4f91-b825-18d5429526a9-secret-volume\") pod \"collect-profiles-29520810-vq6f4\" (UID: \"fca937fd-eef1-4f91-b825-18d5429526a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675794 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4l74\" (UniqueName: \"kubernetes.io/projected/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-kube-api-access-b4l74\") pod \"marketplace-operator-79b997595-kc7dg\" (UID: \"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231\") " pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675817 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/057b7c44-3b11-4a03-8325-0b3819b55f6f-tmpfs\") pod \"packageserver-d55dfcdfc-6n6fc\" (UID: \"057b7c44-3b11-4a03-8325-0b3819b55f6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675846 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5bd1b4d8-80f4-4044-891b-a5e3450a0f48-default-certificate\") pod \"router-default-5444994796-b7psd\" (UID: \"5bd1b4d8-80f4-4044-891b-a5e3450a0f48\") " pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675884 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft4vf\" (UniqueName: 
\"kubernetes.io/projected/9cf0c1ed-445f-4f9c-a8c3-c903d559de4d-kube-api-access-ft4vf\") pod \"migrator-59844c95c7-46895\" (UID: \"9cf0c1ed-445f-4f9c-a8c3-c903d559de4d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46895" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675903 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bd1b4d8-80f4-4044-891b-a5e3450a0f48-service-ca-bundle\") pod \"router-default-5444994796-b7psd\" (UID: \"5bd1b4d8-80f4-4044-891b-a5e3450a0f48\") " pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675919 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b26542fa-2c38-47d7-984b-e51679e600c4-profile-collector-cert\") pod \"catalog-operator-68c6474976-2hqkd\" (UID: \"b26542fa-2c38-47d7-984b-e51679e600c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675936 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kc7dg\" (UID: \"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231\") " pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675953 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vswk7\" (UniqueName: \"kubernetes.io/projected/db15826a-b0d8-4fb5-9a69-35ae6888b029-kube-api-access-vswk7\") pod \"machine-config-controller-84d6567774-tkjwb\" (UID: \"db15826a-b0d8-4fb5-9a69-35ae6888b029\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb" Feb 16 
13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675977 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtcl6\" (UniqueName: \"kubernetes.io/projected/809592c1-c9ad-49f0-90a6-cea3bbebf136-kube-api-access-vtcl6\") pod \"service-ca-9c57cc56f-h57x4\" (UID: \"809592c1-c9ad-49f0-90a6-cea3bbebf136\") " pod="openshift-service-ca/service-ca-9c57cc56f-h57x4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.675993 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7stz\" (UniqueName: \"kubernetes.io/projected/a3c1e652-a321-42b1-b658-624e89a01eb3-kube-api-access-x7stz\") pod \"ingress-canary-6c7g6\" (UID: \"a3c1e652-a321-42b1-b658-624e89a01eb3\") " pod="openshift-ingress-canary/ingress-canary-6c7g6" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.676010 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz5pd\" (UniqueName: \"kubernetes.io/projected/5bd1b4d8-80f4-4044-891b-a5e3450a0f48-kube-api-access-dz5pd\") pod \"router-default-5444994796-b7psd\" (UID: \"5bd1b4d8-80f4-4044-891b-a5e3450a0f48\") " pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.676025 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kc7dg\" (UID: \"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231\") " pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.676043 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/939c437d-1347-489e-bb4a-1b783a62d707-node-bootstrap-token\") pod \"machine-config-server-qfjbm\" (UID: 
\"939c437d-1347-489e-bb4a-1b783a62d707\") " pod="openshift-machine-config-operator/machine-config-server-qfjbm" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.676067 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqqc7\" (UniqueName: \"kubernetes.io/projected/057b7c44-3b11-4a03-8325-0b3819b55f6f-kube-api-access-dqqc7\") pod \"packageserver-d55dfcdfc-6n6fc\" (UID: \"057b7c44-3b11-4a03-8325-0b3819b55f6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.676082 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ece23bb-e939-4912-99fc-ea54a7c7336e-metrics-tls\") pod \"ingress-operator-5b745b69d9-ll6fj\" (UID: \"2ece23bb-e939-4912-99fc-ea54a7c7336e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.676099 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/939c437d-1347-489e-bb4a-1b783a62d707-certs\") pod \"machine-config-server-qfjbm\" (UID: \"939c437d-1347-489e-bb4a-1b783a62d707\") " pod="openshift-machine-config-operator/machine-config-server-qfjbm" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.676113 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/057b7c44-3b11-4a03-8325-0b3819b55f6f-apiservice-cert\") pod \"packageserver-d55dfcdfc-6n6fc\" (UID: \"057b7c44-3b11-4a03-8325-0b3819b55f6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.676132 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg95r\" (UniqueName: 
\"kubernetes.io/projected/88330cc0-3dd3-4ff7-8661-6a79d3e1667a-kube-api-access-vg95r\") pod \"machine-config-operator-74547568cd-gjtcl\" (UID: \"88330cc0-3dd3-4ff7-8661-6a79d3e1667a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.676147 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d34aa26a-9b3b-463d-bea6-be2d12b5854c-mountpoint-dir\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.676240 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d34aa26a-9b3b-463d-bea6-be2d12b5854c-mountpoint-dir\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: E0216 13:34:13.676317 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:14.176300512 +0000 UTC m=+143.240631213 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.680671 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/89281b9f-7c51-470c-aa86-bdfd398f2a2a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-klckm\" (UID: \"89281b9f-7c51-470c-aa86-bdfd398f2a2a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klckm" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.681189 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/110cad9b-2348-4b66-a432-4461f3bd77c6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wknkb\" (UID: \"110cad9b-2348-4b66-a432-4461f3bd77c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.681488 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be72ca7-b8da-4034-b6e0-16218c2e793e-config\") pod \"service-ca-operator-777779d784-jrx24\" (UID: \"7be72ca7-b8da-4034-b6e0-16218c2e793e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jrx24" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.681627 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/057b7c44-3b11-4a03-8325-0b3819b55f6f-tmpfs\") 
pod \"packageserver-d55dfcdfc-6n6fc\" (UID: \"057b7c44-3b11-4a03-8325-0b3819b55f6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.682578 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fca937fd-eef1-4f91-b825-18d5429526a9-config-volume\") pod \"collect-profiles-29520810-vq6f4\" (UID: \"fca937fd-eef1-4f91-b825-18d5429526a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.682989 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kc7dg\" (UID: \"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231\") " pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.683282 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/88330cc0-3dd3-4ff7-8661-6a79d3e1667a-images\") pod \"machine-config-operator-74547568cd-gjtcl\" (UID: \"88330cc0-3dd3-4ff7-8661-6a79d3e1667a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.683744 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5bd1b4d8-80f4-4044-891b-a5e3450a0f48-stats-auth\") pod \"router-default-5444994796-b7psd\" (UID: \"5bd1b4d8-80f4-4044-891b-a5e3450a0f48\") " pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.683871 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8258\" (UniqueName: 
\"kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-kube-api-access-p8258\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.685323 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/db15826a-b0d8-4fb5-9a69-35ae6888b029-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tkjwb\" (UID: \"db15826a-b0d8-4fb5-9a69-35ae6888b029\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.686367 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bd1b4d8-80f4-4044-891b-a5e3450a0f48-service-ca-bundle\") pod \"router-default-5444994796-b7psd\" (UID: \"5bd1b4d8-80f4-4044-891b-a5e3450a0f48\") " pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.687848 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b26542fa-2c38-47d7-984b-e51679e600c4-profile-collector-cert\") pod \"catalog-operator-68c6474976-2hqkd\" (UID: \"b26542fa-2c38-47d7-984b-e51679e600c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.688532 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5bd1b4d8-80f4-4044-891b-a5e3450a0f48-default-certificate\") pod \"router-default-5444994796-b7psd\" (UID: \"5bd1b4d8-80f4-4044-891b-a5e3450a0f48\") " pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.688661 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d34aa26a-9b3b-463d-bea6-be2d12b5854c-csi-data-dir\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.689842 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tpgqc"] Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.691964 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kc7dg\" (UID: \"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231\") " pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.693958 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/939c437d-1347-489e-bb4a-1b783a62d707-node-bootstrap-token\") pod \"machine-config-server-qfjbm\" (UID: \"939c437d-1347-489e-bb4a-1b783a62d707\") " pod="openshift-machine-config-operator/machine-config-server-qfjbm" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.694229 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c80fa26-a106-41fa-b66d-53954e1b233b-srv-cert\") pod \"olm-operator-6b444d44fb-g8zmc\" (UID: \"9c80fa26-a106-41fa-b66d-53954e1b233b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.695326 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/809592c1-c9ad-49f0-90a6-cea3bbebf136-signing-cabundle\") pod 
\"service-ca-9c57cc56f-h57x4\" (UID: \"809592c1-c9ad-49f0-90a6-cea3bbebf136\") " pod="openshift-service-ca/service-ca-9c57cc56f-h57x4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.696129 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3f70bd4-a15e-44dd-a610-ee085e108403-config-volume\") pod \"dns-default-pxwbj\" (UID: \"a3f70bd4-a15e-44dd-a610-ee085e108403\") " pod="openshift-dns/dns-default-pxwbj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.696428 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d34aa26a-9b3b-463d-bea6-be2d12b5854c-plugins-dir\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.698418 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88330cc0-3dd3-4ff7-8661-6a79d3e1667a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gjtcl\" (UID: \"88330cc0-3dd3-4ff7-8661-6a79d3e1667a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.699581 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c03df561-6085-44d5-a33c-3c01a749858e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-v7f6l\" (UID: \"c03df561-6085-44d5-a33c-3c01a749858e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.700800 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/13167121-9190-4ef3-b635-d528457b4c53-webhook-certs\") 
pod \"multus-admission-controller-857f4d67dd-nwjmg\" (UID: \"13167121-9190-4ef3-b635-d528457b4c53\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nwjmg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.701362 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3c1e652-a321-42b1-b658-624e89a01eb3-cert\") pod \"ingress-canary-6c7g6\" (UID: \"a3c1e652-a321-42b1-b658-624e89a01eb3\") " pod="openshift-ingress-canary/ingress-canary-6c7g6" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.701876 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d34aa26a-9b3b-463d-bea6-be2d12b5854c-socket-dir\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.702013 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d34aa26a-9b3b-463d-bea6-be2d12b5854c-registration-dir\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.702411 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ece23bb-e939-4912-99fc-ea54a7c7336e-metrics-tls\") pod \"ingress-operator-5b745b69d9-ll6fj\" (UID: \"2ece23bb-e939-4912-99fc-ea54a7c7336e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.702854 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ecaa84-5c71-4570-8f7f-d753d2eeb9ab-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-j84fb\" (UID: 
\"08ecaa84-5c71-4570-8f7f-d753d2eeb9ab\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.709606 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3f70bd4-a15e-44dd-a610-ee085e108403-metrics-tls\") pod \"dns-default-pxwbj\" (UID: \"a3f70bd4-a15e-44dd-a610-ee085e108403\") " pod="openshift-dns/dns-default-pxwbj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.709825 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5bd1b4d8-80f4-4044-891b-a5e3450a0f48-metrics-certs\") pod \"router-default-5444994796-b7psd\" (UID: \"5bd1b4d8-80f4-4044-891b-a5e3450a0f48\") " pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.709818 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/939c437d-1347-489e-bb4a-1b783a62d707-certs\") pod \"machine-config-server-qfjbm\" (UID: \"939c437d-1347-489e-bb4a-1b783a62d707\") " pod="openshift-machine-config-operator/machine-config-server-qfjbm" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.710154 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/057b7c44-3b11-4a03-8325-0b3819b55f6f-apiservice-cert\") pod \"packageserver-d55dfcdfc-6n6fc\" (UID: \"057b7c44-3b11-4a03-8325-0b3819b55f6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.710405 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ece23bb-e939-4912-99fc-ea54a7c7336e-trusted-ca\") pod \"ingress-operator-5b745b69d9-ll6fj\" (UID: 
\"2ece23bb-e939-4912-99fc-ea54a7c7336e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.711933 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/84482702-f4be-41ce-98c6-eb5161d23ba0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b5crl\" (UID: \"84482702-f4be-41ce-98c6-eb5161d23ba0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.713148 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nplvk"] Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.713509 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7be72ca7-b8da-4034-b6e0-16218c2e793e-serving-cert\") pod \"service-ca-operator-777779d784-jrx24\" (UID: \"7be72ca7-b8da-4034-b6e0-16218c2e793e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jrx24" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.714071 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88330cc0-3dd3-4ff7-8661-6a79d3e1667a-proxy-tls\") pod \"machine-config-operator-74547568cd-gjtcl\" (UID: \"88330cc0-3dd3-4ff7-8661-6a79d3e1667a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.709970 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/057b7c44-3b11-4a03-8325-0b3819b55f6f-webhook-cert\") pod \"packageserver-d55dfcdfc-6n6fc\" (UID: \"057b7c44-3b11-4a03-8325-0b3819b55f6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" Feb 
16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.715999 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnklk\" (UniqueName: \"kubernetes.io/projected/9d7cab04-b239-4a4c-b3da-ba280200cd57-kube-api-access-fnklk\") pod \"openshift-apiserver-operator-796bbdcf4f-8ngmk\" (UID: \"9d7cab04-b239-4a4c-b3da-ba280200cd57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.716117 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c80fa26-a106-41fa-b66d-53954e1b233b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g8zmc\" (UID: \"9c80fa26-a106-41fa-b66d-53954e1b233b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.716333 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b26542fa-2c38-47d7-984b-e51679e600c4-srv-cert\") pod \"catalog-operator-68c6474976-2hqkd\" (UID: \"b26542fa-2c38-47d7-984b-e51679e600c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.718589 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/110cad9b-2348-4b66-a432-4461f3bd77c6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wknkb\" (UID: \"110cad9b-2348-4b66-a432-4461f3bd77c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.718770 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8dba5fdd-d62c-41c5-9550-d98118b3b1a1-metrics-tls\") pod \"dns-operator-744455d44c-w525k\" (UID: 
\"8dba5fdd-d62c-41c5-9550-d98118b3b1a1\") " pod="openshift-dns-operator/dns-operator-744455d44c-w525k" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.719643 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fca937fd-eef1-4f91-b825-18d5429526a9-secret-volume\") pod \"collect-profiles-29520810-vq6f4\" (UID: \"fca937fd-eef1-4f91-b825-18d5429526a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.721377 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/809592c1-c9ad-49f0-90a6-cea3bbebf136-signing-key\") pod \"service-ca-9c57cc56f-h57x4\" (UID: \"809592c1-c9ad-49f0-90a6-cea3bbebf136\") " pod="openshift-service-ca/service-ca-9c57cc56f-h57x4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.725517 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db15826a-b0d8-4fb5-9a69-35ae6888b029-proxy-tls\") pod \"machine-config-controller-84d6567774-tkjwb\" (UID: \"db15826a-b0d8-4fb5-9a69-35ae6888b029\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.709788 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c03df561-6085-44d5-a33c-3c01a749858e-config\") pod \"kube-controller-manager-operator-78b949d7b-v7f6l\" (UID: \"c03df561-6085-44d5-a33c-3c01a749858e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.743203 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ecaa84-5c71-4570-8f7f-d753d2eeb9ab-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-j84fb\" (UID: \"08ecaa84-5c71-4570-8f7f-d753d2eeb9ab\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.754373 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp678\" (UniqueName: \"kubernetes.io/projected/9c80fa26-a106-41fa-b66d-53954e1b233b-kube-api-access-kp678\") pod \"olm-operator-6b444d44fb-g8zmc\" (UID: \"9c80fa26-a106-41fa-b66d-53954e1b233b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.766431 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg44q\" (UniqueName: \"kubernetes.io/projected/a3f70bd4-a15e-44dd-a610-ee085e108403-kube-api-access-qg44q\") pod \"dns-default-pxwbj\" (UID: \"a3f70bd4-a15e-44dd-a610-ee085e108403\") " pod="openshift-dns/dns-default-pxwbj" Feb 16 13:34:13 crc kubenswrapper[4812]: W0216 13:34:13.769974 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc221ee5f_91c7_4ca7_9567_55cd7bd72beb.slice/crio-5cc88ce29e1f85ba314b9b1a794f2573b56866a5d50fc5088fbdbfd7af73a0dc WatchSource:0}: Error finding container 5cc88ce29e1f85ba314b9b1a794f2573b56866a5d50fc5088fbdbfd7af73a0dc: Status 404 returned error can't find the container with id 5cc88ce29e1f85ba314b9b1a794f2573b56866a5d50fc5088fbdbfd7af73a0dc Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.777625 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: E0216 13:34:13.778080 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:14.278068112 +0000 UTC m=+143.342398813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.778554 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.779909 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7stz\" (UniqueName: \"kubernetes.io/projected/a3c1e652-a321-42b1-b658-624e89a01eb3-kube-api-access-x7stz\") pod \"ingress-canary-6c7g6\" (UID: \"a3c1e652-a321-42b1-b658-624e89a01eb3\") " pod="openshift-ingress-canary/ingress-canary-6c7g6" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.808956 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.812023 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqqc7\" (UniqueName: \"kubernetes.io/projected/057b7c44-3b11-4a03-8325-0b3819b55f6f-kube-api-access-dqqc7\") pod \"packageserver-d55dfcdfc-6n6fc\" (UID: \"057b7c44-3b11-4a03-8325-0b3819b55f6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.823462 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz5pd\" (UniqueName: \"kubernetes.io/projected/5bd1b4d8-80f4-4044-891b-a5e3450a0f48-kube-api-access-dz5pd\") pod \"router-default-5444994796-b7psd\" (UID: \"5bd1b4d8-80f4-4044-891b-a5e3450a0f48\") " pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.846485 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khs8b\" (UniqueName: \"kubernetes.io/projected/13167121-9190-4ef3-b635-d528457b4c53-kube-api-access-khs8b\") pod \"multus-admission-controller-857f4d67dd-nwjmg\" (UID: \"13167121-9190-4ef3-b635-d528457b4c53\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nwjmg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.860689 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh"] Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.865589 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2"] Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.875455 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft4vf\" (UniqueName: 
\"kubernetes.io/projected/9cf0c1ed-445f-4f9c-a8c3-c903d559de4d-kube-api-access-ft4vf\") pod \"migrator-59844c95c7-46895\" (UID: \"9cf0c1ed-445f-4f9c-a8c3-c903d559de4d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46895" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.879809 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:13 crc kubenswrapper[4812]: E0216 13:34:13.880601 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:14.380578054 +0000 UTC m=+143.444908755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.884011 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46895" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.886646 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c03df561-6085-44d5-a33c-3c01a749858e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-v7f6l\" (UID: \"c03df561-6085-44d5-a33c-3c01a749858e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.913229 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.914794 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9876\" (UniqueName: \"kubernetes.io/projected/8dba5fdd-d62c-41c5-9550-d98118b3b1a1-kube-api-access-j9876\") pod \"dns-operator-744455d44c-w525k\" (UID: \"8dba5fdd-d62c-41c5-9550-d98118b3b1a1\") " pod="openshift-dns-operator/dns-operator-744455d44c-w525k" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.915033 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.926506 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.926969 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw89n\" (UniqueName: \"kubernetes.io/projected/fca937fd-eef1-4f91-b825-18d5429526a9-kube-api-access-cw89n\") pod \"collect-profiles-29520810-vq6f4\" (UID: \"fca937fd-eef1-4f91-b825-18d5429526a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.942387 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.946348 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nwjmg" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.947663 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt5cx\" (UniqueName: \"kubernetes.io/projected/b26542fa-2c38-47d7-984b-e51679e600c4-kube-api-access-gt5cx\") pod \"catalog-operator-68c6474976-2hqkd\" (UID: \"b26542fa-2c38-47d7-984b-e51679e600c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" Feb 16 13:34:13 crc kubenswrapper[4812]: W0216 13:34:13.959425 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f368631_7f8d_4004_a36c_38cb52391cb4.slice/crio-8664cde524e4be6cd4d7624cf42ce8cd80cfb7912ff561a18cc847cf755a0d3d WatchSource:0}: Error finding container 8664cde524e4be6cd4d7624cf42ce8cd80cfb7912ff561a18cc847cf755a0d3d: Status 404 returned error can't find the container with id 8664cde524e4be6cd4d7624cf42ce8cd80cfb7912ff561a18cc847cf755a0d3d Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.962428 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vb8x\" (UniqueName: \"kubernetes.io/projected/84482702-f4be-41ce-98c6-eb5161d23ba0-kube-api-access-4vb8x\") pod \"package-server-manager-789f6589d5-b5crl\" (UID: \"84482702-f4be-41ce-98c6-eb5161d23ba0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.980007 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" Feb 16 13:34:13 crc kubenswrapper[4812]: I0216 13:34:13.981382 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:13 crc kubenswrapper[4812]: E0216 13:34:13.981709 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:14.481695394 +0000 UTC m=+143.546026105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:13.986168 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vswk7\" (UniqueName: \"kubernetes.io/projected/db15826a-b0d8-4fb5-9a69-35ae6888b029-kube-api-access-vswk7\") pod \"machine-config-controller-84d6567774-tkjwb\" (UID: \"db15826a-b0d8-4fb5-9a69-35ae6888b029\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:13.986319 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6c7g6" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.002081 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-pxwbj" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.004592 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtcl6\" (UniqueName: \"kubernetes.io/projected/809592c1-c9ad-49f0-90a6-cea3bbebf136-kube-api-access-vtcl6\") pod \"service-ca-9c57cc56f-h57x4\" (UID: \"809592c1-c9ad-49f0-90a6-cea3bbebf136\") " pod="openshift-service-ca/service-ca-9c57cc56f-h57x4" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.021105 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdbbn\" (UniqueName: \"kubernetes.io/projected/d34aa26a-9b3b-463d-bea6-be2d12b5854c-kube-api-access-mdbbn\") pod \"csi-hostpathplugin-dqt5h\" (UID: \"d34aa26a-9b3b-463d-bea6-be2d12b5854c\") " pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.051133 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n"] Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.057160 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ece23bb-e939-4912-99fc-ea54a7c7336e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ll6fj\" (UID: \"2ece23bb-e939-4912-99fc-ea54a7c7336e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.064927 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg95r\" (UniqueName: \"kubernetes.io/projected/88330cc0-3dd3-4ff7-8661-6a79d3e1667a-kube-api-access-vg95r\") pod \"machine-config-operator-74547568cd-gjtcl\" (UID: \"88330cc0-3dd3-4ff7-8661-6a79d3e1667a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.072122 4812 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.080052 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgjtd\" (UniqueName: \"kubernetes.io/projected/7be72ca7-b8da-4034-b6e0-16218c2e793e-kube-api-access-bgjtd\") pod \"service-ca-operator-777779d784-jrx24\" (UID: \"7be72ca7-b8da-4034-b6e0-16218c2e793e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jrx24" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.083732 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:14 crc kubenswrapper[4812]: E0216 13:34:14.084028 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:14.58397741 +0000 UTC m=+143.648308111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.084126 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:14 crc kubenswrapper[4812]: E0216 13:34:14.084658 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:14.58464336 +0000 UTC m=+143.648974061 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.106853 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb67b\" (UniqueName: \"kubernetes.io/projected/08ecaa84-5c71-4570-8f7f-d753d2eeb9ab-kube-api-access-wb67b\") pod \"kube-storage-version-migrator-operator-b67b599dd-j84fb\" (UID: \"08ecaa84-5c71-4570-8f7f-d753d2eeb9ab\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.135980 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-w525k" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.143111 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.157738 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr5km\" (UniqueName: \"kubernetes.io/projected/89281b9f-7c51-470c-aa86-bdfd398f2a2a-kube-api-access-hr5km\") pod \"control-plane-machine-set-operator-78cbb6b69f-klckm\" (UID: \"89281b9f-7c51-470c-aa86-bdfd398f2a2a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klckm" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.171210 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klckm" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.173651 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmc92\" (UniqueName: \"kubernetes.io/projected/2ece23bb-e939-4912-99fc-ea54a7c7336e-kube-api-access-cmc92\") pod \"ingress-operator-5b745b69d9-ll6fj\" (UID: \"2ece23bb-e939-4912-99fc-ea54a7c7336e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.178824 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4l74\" (UniqueName: \"kubernetes.io/projected/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-kube-api-access-b4l74\") pod \"marketplace-operator-79b997595-kc7dg\" (UID: \"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231\") " pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.184161 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.187344 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:14 crc kubenswrapper[4812]: E0216 13:34:14.189496 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:14.689473845 +0000 UTC m=+143.753804556 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.189587 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.205236 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.209041 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-plrxx"] Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.216314 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/110cad9b-2348-4b66-a432-4461f3bd77c6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wknkb\" (UID: \"110cad9b-2348-4b66-a432-4461f3bd77c6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.223731 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wc6pn"] Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.227552 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.231364 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf"] Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.236310 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf9sx\" (UniqueName: \"kubernetes.io/projected/939c437d-1347-489e-bb4a-1b783a62d707-kube-api-access-tf9sx\") pod \"machine-config-server-qfjbm\" (UID: \"939c437d-1347-489e-bb4a-1b783a62d707\") " pod="openshift-machine-config-operator/machine-config-server-qfjbm" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.236708 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg"] Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.236728 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.254862 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.268668 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4cx9t"] Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.268891 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-h57x4" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.270334 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.290782 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:14 crc kubenswrapper[4812]: E0216 13:34:14.291295 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:14.791282336 +0000 UTC m=+143.855613037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.295248 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jrx24" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.307940 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7wvg2"] Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.326243 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qfjbm" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.393615 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:14 crc kubenswrapper[4812]: E0216 13:34:14.394066 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:14.894047537 +0000 UTC m=+143.958378248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.404765 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2"] Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.453294 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.459737 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.499738 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:14 crc kubenswrapper[4812]: E0216 13:34:14.503082 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:15.003053561 +0000 UTC m=+144.067384262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.550381 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.550421 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.551974 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-46895"] Feb 16 13:34:14 crc kubenswrapper[4812]: W0216 13:34:14.566287 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dcfa0e5_1712_4411_afe5_e922c185b120.slice/crio-8a67c64e8499e331968e6f63acd9ee0c0c61e3174a9cabbb05be7ed9e60a19d3 WatchSource:0}: Error finding container 8a67c64e8499e331968e6f63acd9ee0c0c61e3174a9cabbb05be7ed9e60a19d3: Status 404 returned error can't find the container with id 8a67c64e8499e331968e6f63acd9ee0c0c61e3174a9cabbb05be7ed9e60a19d3 Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.601665 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:14 crc kubenswrapper[4812]: E0216 13:34:14.601856 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:15.101837698 +0000 UTC m=+144.166168399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.603040 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:14 crc kubenswrapper[4812]: E0216 13:34:14.603486 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:15.103465569 +0000 UTC m=+144.167796270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.637154 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg" event={"ID":"9437d039-7efe-4e41-810c-2cf9c324ae08","Type":"ContainerStarted","Data":"6569f75374d0175c2f53c1f991005fc79f0324f257867e77ab50fa8fa0241986"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.648349 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" event={"ID":"8f6a70e5-ea2c-431f-b749-bab49aa63442","Type":"ContainerStarted","Data":"78db6c16a0e5e9eef3dfd9aed2961e0227d4096af3c19f7ed61dba6c7410e157"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.675310 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tpgqc" event={"ID":"d8f24d90-54d8-4344-8140-c9fa919b456a","Type":"ContainerStarted","Data":"44a7b19ee043c83429b1625f0f31f28cfa515c2e5745104244c3e3557f6bdfdb"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.675461 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tpgqc" event={"ID":"d8f24d90-54d8-4344-8140-c9fa919b456a","Type":"ContainerStarted","Data":"c9a4a8fa70f753527518324fed561af1b274e0616d6a4cdded6e757866a0c53e"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.677774 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2" 
event={"ID":"cabaed27-8848-4061-9644-ff60ca94389c","Type":"ContainerStarted","Data":"e12d1714656b4f8abdc6132d3c71399d3ad1564055068c46c869c2c37a00a567"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.680316 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2" event={"ID":"c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e","Type":"ContainerStarted","Data":"4a726caa87e0cd4c8d456b1ada2167aa9923426a5a34599c6c494d8ffcac8fb1"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.680462 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2" event={"ID":"c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e","Type":"ContainerStarted","Data":"84d793e1673fee51d7e2cf285c5c68e6d5e5196a5cc863e0737837242d31671b"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.688281 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sv88f" event={"ID":"c221ee5f-91c7-4ca7-9567-55cd7bd72beb","Type":"ContainerStarted","Data":"5d4420b0c073998b13329c6a80e31a730fa5f3e63a44148f3ee2f74a30ace67f"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.688329 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sv88f" event={"ID":"c221ee5f-91c7-4ca7-9567-55cd7bd72beb","Type":"ContainerStarted","Data":"5cc88ce29e1f85ba314b9b1a794f2573b56866a5d50fc5088fbdbfd7af73a0dc"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.688346 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-sv88f" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.691179 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-sv88f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" 
start-of-body= Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.691230 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sv88f" podUID="c221ee5f-91c7-4ca7-9567-55cd7bd72beb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.694563 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n" event={"ID":"c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34","Type":"ContainerStarted","Data":"788af7ed7df51e2027f8bfb4867939ab593c8ced00ffff07afae1906446a717a"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.698069 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" event={"ID":"3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479","Type":"ContainerStarted","Data":"e7df5f2768ad833eecee846d2ec66a0c41ff8d20eadd08b15ed9f90b205a2e5e"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.706118 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:14 crc kubenswrapper[4812]: E0216 13:34:14.706553 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:15.206533529 +0000 UTC m=+144.270864230 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.707591 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" event={"ID":"1f368631-7f8d-4004-a36c-38cb52391cb4","Type":"ContainerStarted","Data":"8664cde524e4be6cd4d7624cf42ce8cd80cfb7912ff561a18cc847cf755a0d3d"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.713594 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" event={"ID":"1dcfa0e5-1712-4411-afe5-e922c185b120","Type":"ContainerStarted","Data":"8a67c64e8499e331968e6f63acd9ee0c0c61e3174a9cabbb05be7ed9e60a19d3"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.735353 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-b7psd" event={"ID":"5bd1b4d8-80f4-4044-891b-a5e3450a0f48","Type":"ContainerStarted","Data":"462a091ea586a0d04e00010714484464abb5dff021dd196141a76717b206019d"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.741143 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" event={"ID":"5245eea2-0039-4127-bd35-5d4ab5204b62","Type":"ContainerStarted","Data":"bb8177475cd3dedf523712b692ebc348f761ec602559ca31f6fb974cc5887146"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.742555 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 
13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.743938 4812 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4mg2p container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.744093 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" podUID="5245eea2-0039-4127-bd35-5d4ab5204b62" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.745302 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf" event={"ID":"a6a96223-8094-41a8-a311-231ef35ac6b2","Type":"ContainerStarted","Data":"443dfdb0b86e647515890a5a216291617cda45c20924f2f9ac7025c8afe67024"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.746748 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nplvk" event={"ID":"ae5e2f5e-3826-45ce-a7af-a2670fcc41ab","Type":"ContainerStarted","Data":"1096d9432d0da917c4ea7ee9afe95a069892226c877a77c168341171d523b9a8"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.746782 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nplvk" event={"ID":"ae5e2f5e-3826-45ce-a7af-a2670fcc41ab","Type":"ContainerStarted","Data":"a17e7c1c5583544d5db2131bcefb8ee0de75667a2950994c81947db1c1d49f52"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.747700 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-nplvk" Feb 16 13:34:14 crc 
kubenswrapper[4812]: I0216 13:34:14.749627 4812 patch_prober.go:28] interesting pod/console-operator-58897d9998-nplvk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.749676 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nplvk" podUID="ae5e2f5e-3826-45ce-a7af-a2670fcc41ab" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.772175 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" event={"ID":"18c621e3-e734-428a-9bf7-930f8d450c8e","Type":"ContainerStarted","Data":"478c8e1af288d9e6267bd86d3ee12f439a1cfcce75ae333d28ef8dfd209997d7"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.774566 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" event={"ID":"7f4d6c63-7c73-4fae-8738-04def1b3e5e3","Type":"ContainerStarted","Data":"69b83dd109e97c50fd43d58b9b164639557ce213a5b2749238a499bfc33cddda"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.778181 4812 generic.go:334] "Generic (PLEG): container finished" podID="97376a54-e945-445a-b5fe-b2b658705dc5" containerID="403ffdab4428ff197a43e1a11a9cd75435e4fbd00c15f73bc652187de25d1955" exitCode=0 Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.778249 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl" event={"ID":"97376a54-e945-445a-b5fe-b2b658705dc5","Type":"ContainerDied","Data":"403ffdab4428ff197a43e1a11a9cd75435e4fbd00c15f73bc652187de25d1955"} Feb 16 13:34:14 crc 
kubenswrapper[4812]: I0216 13:34:14.778275 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl" event={"ID":"97376a54-e945-445a-b5fe-b2b658705dc5","Type":"ContainerStarted","Data":"05d4ee631edc51b8c2b89bd55b58d38048fa3a86b231af0becedf1b42b917c0f"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.781650 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" event={"ID":"5be0ecd5-70de-4fa9-abcc-685cef55d530","Type":"ContainerStarted","Data":"c95b1163627d12ca101b1b86869f286aa23046af728713ee49a85d4a096302fb"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.781687 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" event={"ID":"5be0ecd5-70de-4fa9-abcc-685cef55d530","Type":"ContainerStarted","Data":"40bbe3d8760dfc430533c728e1a01a8b67fffb9c377f58ce09d446332c3938a7"} Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.782773 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.784127 4812 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-72lrh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.784170 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" podUID="5be0ecd5-70de-4fa9-abcc-685cef55d530" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 16 13:34:14 crc 
kubenswrapper[4812]: I0216 13:34:14.808596 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:14 crc kubenswrapper[4812]: E0216 13:34:14.813386 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:15.313372086 +0000 UTC m=+144.377702787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.909968 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:14 crc kubenswrapper[4812]: E0216 13:34:14.910151 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 13:34:15.41012726 +0000 UTC m=+144.474457961 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.910640 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:14 crc kubenswrapper[4812]: E0216 13:34:14.911342 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:15.411332327 +0000 UTC m=+144.475663028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.915677 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.943011 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 16 13:34:14 crc kubenswrapper[4812]: I0216 13:34:14.943274 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.011735 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:15 crc kubenswrapper[4812]: E0216 13:34:15.012690 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:15.512675444 +0000 UTC m=+144.577006135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.034511 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pxwbj"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.042654 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nwjmg"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.058751 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.091461 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6c7g6"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.123233 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:15 crc kubenswrapper[4812]: E0216 13:34:15.124002 4812 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:15.62398367 +0000 UTC m=+144.688314361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.225181 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:15 crc kubenswrapper[4812]: E0216 13:34:15.225630 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:15.725611005 +0000 UTC m=+144.789941706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.247092 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dqt5h"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.343342 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:15 crc kubenswrapper[4812]: E0216 13:34:15.343928 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:15.843915448 +0000 UTC m=+144.908246149 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.444438 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:15 crc kubenswrapper[4812]: E0216 13:34:15.444858 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:15.944837232 +0000 UTC m=+145.009167933 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.543493 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.548648 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.548978 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk"] Feb 16 13:34:15 crc kubenswrapper[4812]: E0216 13:34:15.549013 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:16.048997736 +0000 UTC m=+145.113328437 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.582596 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-w525k"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.587767 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l"] Feb 16 13:34:15 crc kubenswrapper[4812]: W0216 13:34:15.628558 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dba5fdd_d62c_41c5_9550_d98118b3b1a1.slice/crio-15d93ef5edc743edeb976ae5187b7b68cdedd85c96fbc343db8cf8be4ea34abc WatchSource:0}: Error finding container 15d93ef5edc743edeb976ae5187b7b68cdedd85c96fbc343db8cf8be4ea34abc: Status 404 returned error can't find the container with id 15d93ef5edc743edeb976ae5187b7b68cdedd85c96fbc343db8cf8be4ea34abc Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.650225 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:15 crc kubenswrapper[4812]: E0216 13:34:15.650362 4812 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:16.150331222 +0000 UTC m=+145.214661923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.650515 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:15 crc kubenswrapper[4812]: E0216 13:34:15.650981 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:16.150970862 +0000 UTC m=+145.215301563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.705970 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kc7dg"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.718956 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-h57x4"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.727984 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.729876 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.732676 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.751631 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:15 crc kubenswrapper[4812]: E0216 13:34:15.752071 4812 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:16.25204979 +0000 UTC m=+145.316380491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.762952 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.762996 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jrx24"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.799525 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n" event={"ID":"c91e8dc1-671e-4e1c-8e09-f2f8cec9cf34","Type":"ContainerStarted","Data":"935a2f4b1155cfae3601b7a8aa26004de4957c41521ad1b3cc678037343f88e3"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.805849 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nwjmg" event={"ID":"13167121-9190-4ef3-b635-d528457b4c53","Type":"ContainerStarted","Data":"7ff0ba6aa569f6c49ad878c86b1e9a6fec2319d94df29de4577aa01b3a975f31"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.809749 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klckm"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.810842 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46895" event={"ID":"9cf0c1ed-445f-4f9c-a8c3-c903d559de4d","Type":"ContainerStarted","Data":"9d39361893a4bbfd7a7d23f6cc57cb5b7c52b990f7ba70c593019d6587a5e219"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.810879 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46895" event={"ID":"9cf0c1ed-445f-4f9c-a8c3-c903d559de4d","Type":"ContainerStarted","Data":"9e069ba293a25fbc2e44b58700826283a59434b56902736719addd2e26ab9a44"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.814532 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" event={"ID":"057b7c44-3b11-4a03-8325-0b3819b55f6f","Type":"ContainerStarted","Data":"751a5db7260662d4879f029c08e6cbeb819be8581bf7253494cbd12b8d500218"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.818420 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk" event={"ID":"9d7cab04-b239-4a4c-b3da-ba280200cd57","Type":"ContainerStarted","Data":"6942c6077a45d6e97bc8bfecb2f2febf06dc94a0d0b2c039b018eb83a6b49fb6"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.824964 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" event={"ID":"9c80fa26-a106-41fa-b66d-53954e1b233b","Type":"ContainerStarted","Data":"cd105da179d748d4ca451bd9495f98ba003488974c6cffca36217e0dbadc66fb"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.825031 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" 
event={"ID":"9c80fa26-a106-41fa-b66d-53954e1b233b","Type":"ContainerStarted","Data":"4bfa2198f8d60a1626ecd3ae388595fe21cbb437d65647d4050e1acb53052290"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.843852 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pxwbj" event={"ID":"a3f70bd4-a15e-44dd-a610-ee085e108403","Type":"ContainerStarted","Data":"46b8838eb3d20270e67fa4afbdb0df533b008b8dce905a7f463e67b2911f0a74"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.843896 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pxwbj" event={"ID":"a3f70bd4-a15e-44dd-a610-ee085e108403","Type":"ContainerStarted","Data":"4806de5e178026b3660f21c579b07fdb2531cc083ea801ffcfdaad44b14c37cf"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.854769 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:15 crc kubenswrapper[4812]: E0216 13:34:15.855059 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:16.355037708 +0000 UTC m=+145.419368409 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.856166 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.863534 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.871136 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.877488 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" event={"ID":"b26542fa-2c38-47d7-984b-e51679e600c4","Type":"ContainerStarted","Data":"20e840f35deb8349fcdb3a833332482e070565d154fc61958f0113f49d2e6093"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.905952 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v2s8n" podStartSLOduration=122.905930468 podStartE2EDuration="2m2.905930468s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:15.901471769 +0000 UTC m=+144.965802480" watchObservedRunningTime="2026-02-16 13:34:15.905930468 +0000 UTC 
m=+144.970261199" Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.919539 4812 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-5sss2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.919580 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" podUID="1dcfa0e5-1712-4411-afe5-e922c185b120" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.920280 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:15 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:15 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:15 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.920303 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.920480 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-b7psd" event={"ID":"5bd1b4d8-80f4-4044-891b-a5e3450a0f48","Type":"ContainerStarted","Data":"18502c244850a300295c4d288cb26770e27ea00f2a7ee1b8857266bff0d16b57"} Feb 16 13:34:15 crc 
kubenswrapper[4812]: I0216 13:34:15.920517 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4"] Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.920535 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.920548 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2" event={"ID":"c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e","Type":"ContainerStarted","Data":"62ab20409b15cf595f67b5f5fb6db4dc1355dc0cba9ab43b867c4700b92112f5"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.920557 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-w525k" event={"ID":"8dba5fdd-d62c-41c5-9550-d98118b3b1a1","Type":"ContainerStarted","Data":"15d93ef5edc743edeb976ae5187b7b68cdedd85c96fbc343db8cf8be4ea34abc"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.920567 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" event={"ID":"1dcfa0e5-1712-4411-afe5-e922c185b120","Type":"ContainerStarted","Data":"f06a89a22d1996e3d6cbc87180eb6522ce46d81b424eaf1c827c0bd87358783f"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.920575 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg" event={"ID":"9437d039-7efe-4e41-810c-2cf9c324ae08","Type":"ContainerStarted","Data":"150216abc247ba9fda013a8db514560e47d02fe28f03505fed9ea1c6af097513"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.924399 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" 
event={"ID":"18c621e3-e734-428a-9bf7-930f8d450c8e","Type":"ContainerStarted","Data":"f21ea76e874faed95cbf0e1a1437439a4e97ad1bcdda5f8392ceacc9520fa4f8"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.926694 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" event={"ID":"7f4d6c63-7c73-4fae-8738-04def1b3e5e3","Type":"ContainerStarted","Data":"0f2f5e57d1eca4f0780131b62eacee99a98d42b4c23471d7e7e829506c08ae99"} Feb 16 13:34:15 crc kubenswrapper[4812]: W0216 13:34:15.926909 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84482702_f4be_41ce_98c6_eb5161d23ba0.slice/crio-d6d457ebb7c2e31882ec6fce4439632367cd030de6246986192e6b0105616746 WatchSource:0}: Error finding container d6d457ebb7c2e31882ec6fce4439632367cd030de6246986192e6b0105616746: Status 404 returned error can't find the container with id d6d457ebb7c2e31882ec6fce4439632367cd030de6246986192e6b0105616746 Feb 16 13:34:15 crc kubenswrapper[4812]: W0216 13:34:15.928918 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod110cad9b_2348_4b66_a432_4461f3bd77c6.slice/crio-1f56f77b863c40d26095d5a8a4767074183ca7b9ce1c91bca950e809075f1a22 WatchSource:0}: Error finding container 1f56f77b863c40d26095d5a8a4767074183ca7b9ce1c91bca950e809075f1a22: Status 404 returned error can't find the container with id 1f56f77b863c40d26095d5a8a4767074183ca7b9ce1c91bca950e809075f1a22 Feb 16 13:34:15 crc kubenswrapper[4812]: W0216 13:34:15.930515 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ece23bb_e939_4912_99fc_ea54a7c7336e.slice/crio-84d214735e4872d0fd9cb713e915d5054ce23d982230d929744287608b29ea08 WatchSource:0}: Error finding container 84d214735e4872d0fd9cb713e915d5054ce23d982230d929744287608b29ea08: Status 404 returned error can't 
find the container with id 84d214735e4872d0fd9cb713e915d5054ce23d982230d929744287608b29ea08 Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.933585 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" event={"ID":"d34aa26a-9b3b-463d-bea6-be2d12b5854c","Type":"ContainerStarted","Data":"3214ec2de8a6e3c32ffea9cb86b7806a15bebe72d25d45b4648c0c4229214bda"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.943817 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" event={"ID":"3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479","Type":"ContainerStarted","Data":"c4ff26b9d3edd204c41e7f60be78bb5f34406f2b3cbda90ce9af318f72719e95"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.943909 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" event={"ID":"3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479","Type":"ContainerStarted","Data":"a0afb57178be6f099d7cd7ad61fb312884b3110038df4ec4f7531062e6858747"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.945146 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-tpgqc" podStartSLOduration=122.945126555 podStartE2EDuration="2m2.945126555s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:15.941894595 +0000 UTC m=+145.006225296" watchObservedRunningTime="2026-02-16 13:34:15.945126555 +0000 UTC m=+145.009457256" Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.946303 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf" 
event={"ID":"a6a96223-8094-41a8-a311-231ef35ac6b2","Type":"ContainerStarted","Data":"6f6183e24d917cc59a80b2ff0ab3fea556872208ca43ccb74315a801a86f6214"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.949714 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qfjbm" event={"ID":"939c437d-1347-489e-bb4a-1b783a62d707","Type":"ContainerStarted","Data":"db8e6031833403cf6ca7d14b48fe6165a2bbf38db821b9cf9bf41ee9be2aaf6a"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.949744 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qfjbm" event={"ID":"939c437d-1347-489e-bb4a-1b783a62d707","Type":"ContainerStarted","Data":"0d5e5596295f0e8574a639bb9903853513080af56a4c166aa4808c5ace84315a"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.950923 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6c7g6" event={"ID":"a3c1e652-a321-42b1-b658-624e89a01eb3","Type":"ContainerStarted","Data":"96b2fe9c582ce60b1bc533d1a677d26c0d445273b8c45023ecb8594378489ad2"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.952063 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2" event={"ID":"cabaed27-8848-4061-9644-ff60ca94389c","Type":"ContainerStarted","Data":"17a9a7c425b05cd8c26b3c01ab2b1a6a1d309502e0e195bf478354702fcc2095"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.954472 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" event={"ID":"8f6a70e5-ea2c-431f-b749-bab49aa63442","Type":"ContainerStarted","Data":"6d5962f0ce652d493f24d132bb7fc215c27337f2427f603118c816bae79d48ae"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.954492 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" event={"ID":"8f6a70e5-ea2c-431f-b749-bab49aa63442","Type":"ContainerStarted","Data":"cd99b013f25230b4b09874508cd5fc05854de6d1b8c305e55e8fd7aee05d86c7"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.955063 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:15 crc kubenswrapper[4812]: E0216 13:34:15.955419 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:16.455389363 +0000 UTC m=+145.519720104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.955757 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:15 crc kubenswrapper[4812]: E0216 13:34:15.956064 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:16.456054474 +0000 UTC m=+145.520385245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.956691 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb" event={"ID":"08ecaa84-5c71-4570-8f7f-d753d2eeb9ab","Type":"ContainerStarted","Data":"30659b25834aa292898850ae8eb9aceb7f0cd33831476878f952397f89b2f2d3"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.958362 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l" event={"ID":"c03df561-6085-44d5-a33c-3c01a749858e","Type":"ContainerStarted","Data":"e32cbf5fde4e46739987655541e7adb10fdca5a3273a3b1cd5094a9b09e178b2"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.962499 4812 generic.go:334] "Generic (PLEG): container finished" podID="1f368631-7f8d-4004-a36c-38cb52391cb4" containerID="652dee1c9c0b8c7e4c94d0e6d391e24fe9a521b5ea88b5b44d89a2d67549f8ab" exitCode=0 Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.962840 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" event={"ID":"1f368631-7f8d-4004-a36c-38cb52391cb4","Type":"ContainerStarted","Data":"fafe93895e4c7bc469d889e05ff7109ac090167d09b8f446b5879e03b38dd161"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.962894 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" 
event={"ID":"1f368631-7f8d-4004-a36c-38cb52391cb4","Type":"ContainerDied","Data":"652dee1c9c0b8c7e4c94d0e6d391e24fe9a521b5ea88b5b44d89a2d67549f8ab"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.966878 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl" event={"ID":"97376a54-e945-445a-b5fe-b2b658705dc5","Type":"ContainerStarted","Data":"4f958e626d63bc747ffbc4367ae592209e6f484388cadb287a6e61ea57f1ef32"} Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.967073 4812 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-72lrh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.967120 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" podUID="5be0ecd5-70de-4fa9-abcc-685cef55d530" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.969674 4812 patch_prober.go:28] interesting pod/console-operator-58897d9998-nplvk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.969689 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-sv88f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.969705 
4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nplvk" podUID="ae5e2f5e-3826-45ce-a7af-a2670fcc41ab" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.969736 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sv88f" podUID="c221ee5f-91c7-4ca7-9567-55cd7bd72beb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 16 13:34:15 crc kubenswrapper[4812]: I0216 13:34:15.972508 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.022066 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-sv88f" podStartSLOduration=123.022046173 podStartE2EDuration="2m3.022046173s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:15.978564303 +0000 UTC m=+145.042895004" watchObservedRunningTime="2026-02-16 13:34:16.022046173 +0000 UTC m=+145.086376894" Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.057394 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:16 crc kubenswrapper[4812]: E0216 13:34:16.058308 4812 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:16.558278648 +0000 UTC m=+145.622609349 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.058709 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.064237 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-nplvk" podStartSLOduration=123.064224283 podStartE2EDuration="2m3.064224283s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:16.022804697 +0000 UTC m=+145.087135418" watchObservedRunningTime="2026-02-16 13:34:16.064224283 +0000 UTC m=+145.128554994" Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.066045 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" 
podStartSLOduration=123.066038329 podStartE2EDuration="2m3.066038329s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:16.064045217 +0000 UTC m=+145.128375918" watchObservedRunningTime="2026-02-16 13:34:16.066038329 +0000 UTC m=+145.130369030" Feb 16 13:34:16 crc kubenswrapper[4812]: E0216 13:34:16.066765 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:16.566745741 +0000 UTC m=+145.631076442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.101456 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" podStartSLOduration=123.101412477 podStartE2EDuration="2m3.101412477s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:16.100787778 +0000 UTC m=+145.165118479" watchObservedRunningTime="2026-02-16 13:34:16.101412477 +0000 UTC m=+145.165743178" Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.139426 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-b7psd" 
podStartSLOduration=123.139412797 podStartE2EDuration="2m3.139412797s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:16.139046486 +0000 UTC m=+145.203377187" watchObservedRunningTime="2026-02-16 13:34:16.139412797 +0000 UTC m=+145.203743498" Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.160095 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:16 crc kubenswrapper[4812]: E0216 13:34:16.160296 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:16.660269865 +0000 UTC m=+145.724600566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.160568 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:16 crc kubenswrapper[4812]: E0216 13:34:16.160893 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:16.660876884 +0000 UTC m=+145.725207585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.261773 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:16 crc kubenswrapper[4812]: E0216 13:34:16.262097 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:16.762081556 +0000 UTC m=+145.826412257 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.295796 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2" podStartSLOduration=123.295781102 podStartE2EDuration="2m3.295781102s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:16.257325438 +0000 UTC m=+145.321656149" watchObservedRunningTime="2026-02-16 13:34:16.295781102 +0000 UTC m=+145.360111803" Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.296467 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-qfjbm" podStartSLOduration=5.296434032 podStartE2EDuration="5.296434032s" podCreationTimestamp="2026-02-16 13:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:16.293729708 +0000 UTC m=+145.358060419" watchObservedRunningTime="2026-02-16 13:34:16.296434032 +0000 UTC m=+145.360764733" Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.335437 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h69cg" podStartSLOduration=123.335419243 podStartE2EDuration="2m3.335419243s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:16.334879306 +0000 UTC m=+145.399210007" watchObservedRunningTime="2026-02-16 13:34:16.335419243 +0000 UTC m=+145.399749944" Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.363776 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:16 crc kubenswrapper[4812]: E0216 13:34:16.364056 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:16.864045162 +0000 UTC m=+145.928375863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.387337 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" podStartSLOduration=122.387319294 podStartE2EDuration="2m2.387319294s" podCreationTimestamp="2026-02-16 13:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:16.382879926 +0000 UTC m=+145.447210617" watchObservedRunningTime="2026-02-16 13:34:16.387319294 +0000 UTC m=+145.451649995" Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.414616 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-7wvg2" podStartSLOduration=123.414596321 podStartE2EDuration="2m3.414596321s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:16.413689963 +0000 UTC m=+145.478020674" watchObservedRunningTime="2026-02-16 13:34:16.414596321 +0000 UTC m=+145.478927022" Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.453021 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-plrxx" podStartSLOduration=123.453004084 podStartE2EDuration="2m3.453004084s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:16.452406415 +0000 UTC m=+145.516737136" watchObservedRunningTime="2026-02-16 13:34:16.453004084 +0000 UTC m=+145.517334785" Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.465436 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:16 crc kubenswrapper[4812]: E0216 13:34:16.465910 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:16.965892334 +0000 UTC m=+146.030223045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.492588 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mcbf" podStartSLOduration=123.492564442 podStartE2EDuration="2m3.492564442s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:16.490232979 +0000 UTC m=+145.554563700" watchObservedRunningTime="2026-02-16 13:34:16.492564442 +0000 UTC m=+145.556895143" Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.567154 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:16 crc kubenswrapper[4812]: E0216 13:34:16.567542 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:17.067526519 +0000 UTC m=+146.131857210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.668671 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:16 crc kubenswrapper[4812]: E0216 13:34:16.669100 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:17.169069332 +0000 UTC m=+146.233400033 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.770041 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:16 crc kubenswrapper[4812]: E0216 13:34:16.770406 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:17.270394178 +0000 UTC m=+146.334724879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.871534 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:16 crc kubenswrapper[4812]: E0216 13:34:16.871830 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:17.371805506 +0000 UTC m=+146.436136207 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.872019 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:16 crc kubenswrapper[4812]: E0216 13:34:16.872491 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:17.372474786 +0000 UTC m=+146.436805487 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.921579 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:16 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:16 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:16 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.921985 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:16 crc kubenswrapper[4812]: I0216 13:34:16.972601 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:16 crc kubenswrapper[4812]: E0216 13:34:16.972899 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 13:34:17.472884824 +0000 UTC m=+146.537215525 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.019232 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6c7g6" event={"ID":"a3c1e652-a321-42b1-b658-624e89a01eb3","Type":"ContainerStarted","Data":"d4036f0ffc5bc383703762fb16c3f860687cf94b8b2e3d43fa613118cbc6f89f"} Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.041406 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb" event={"ID":"110cad9b-2348-4b66-a432-4461f3bd77c6","Type":"ContainerStarted","Data":"3656b0e71d8810c751efdbcc2f52fe3c8b6360f84efd814dafe8d727e548c04e"} Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.042468 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb" event={"ID":"110cad9b-2348-4b66-a432-4461f3bd77c6","Type":"ContainerStarted","Data":"1f56f77b863c40d26095d5a8a4767074183ca7b9ce1c91bca950e809075f1a22"} Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.059601 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-6c7g6" podStartSLOduration=7.059583636 podStartE2EDuration="7.059583636s" podCreationTimestamp="2026-02-16 13:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.058845473 +0000 UTC m=+146.123176174" watchObservedRunningTime="2026-02-16 13:34:17.059583636 +0000 UTC m=+146.123914337" Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.082330 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:17 crc kubenswrapper[4812]: E0216 13:34:17.083052 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:17.583032614 +0000 UTC m=+146.647363315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.086003 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wknkb" podStartSLOduration=124.085982095 podStartE2EDuration="2m4.085982095s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.085076937 +0000 UTC m=+146.149407638" 
watchObservedRunningTime="2026-02-16 13:34:17.085982095 +0000 UTC m=+146.150312796" Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.106680 4812 generic.go:334] "Generic (PLEG): container finished" podID="7f4d6c63-7c73-4fae-8738-04def1b3e5e3" containerID="0f2f5e57d1eca4f0780131b62eacee99a98d42b4c23471d7e7e829506c08ae99" exitCode=0 Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.106751 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" event={"ID":"7f4d6c63-7c73-4fae-8738-04def1b3e5e3","Type":"ContainerDied","Data":"0f2f5e57d1eca4f0780131b62eacee99a98d42b4c23471d7e7e829506c08ae99"} Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.141042 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l" event={"ID":"c03df561-6085-44d5-a33c-3c01a749858e","Type":"ContainerStarted","Data":"c046478fcdc6c5956a002857e115c269647f70287c7dda25557ab8d36c4f6a4f"} Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.192039 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:17 crc kubenswrapper[4812]: E0216 13:34:17.192392 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:17.692371339 +0000 UTC m=+146.756702030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.192926 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v"
Feb 16 13:34:17 crc kubenswrapper[4812]: E0216 13:34:17.193313 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:17.693295637 +0000 UTC m=+146.757626398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.208910 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nwjmg" event={"ID":"13167121-9190-4ef3-b635-d528457b4c53","Type":"ContainerStarted","Data":"e91502571f661c6e2701d1af3fa201f5043c49371beb291276f288d1d3830fd7"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.233609 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb" event={"ID":"db15826a-b0d8-4fb5-9a69-35ae6888b029","Type":"ContainerStarted","Data":"352e7d12cd052454b4dd1435100d5df29c8fc16434ef810968f27d64eb5a3b27"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.233664 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb" event={"ID":"db15826a-b0d8-4fb5-9a69-35ae6888b029","Type":"ContainerStarted","Data":"d9f01f4ed71dcf72b0b61cc579c3c0246a6225b35ce8243cc49b04059f4c8a46"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.257125 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jrx24" event={"ID":"7be72ca7-b8da-4034-b6e0-16218c2e793e","Type":"ContainerStarted","Data":"8280b2337e895df63443fac03ac836d9fb560c8db19ed4e27b0f50638afb3e06"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.257172 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jrx24" event={"ID":"7be72ca7-b8da-4034-b6e0-16218c2e793e","Type":"ContainerStarted","Data":"b0be47c2c2297297cbb09c7d3e04a90936a054894071092131f3245706c8f3d9"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.277180 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v7f6l" podStartSLOduration=124.277156881 podStartE2EDuration="2m4.277156881s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.191930575 +0000 UTC m=+146.256261276" watchObservedRunningTime="2026-02-16 13:34:17.277156881 +0000 UTC m=+146.341487592"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.292780 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" event={"ID":"b26542fa-2c38-47d7-984b-e51679e600c4","Type":"ContainerStarted","Data":"4afa3a1e94aafa960258da0b0a3436b939f174cf452c6a7db9cbb65eac1479d4"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.293689 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.293995 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 13:34:17 crc kubenswrapper[4812]: E0216 13:34:17.295242 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:17.795227552 +0000 UTC m=+146.859558253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.308643 4812 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-2hqkd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.308711 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" podUID="b26542fa-2c38-47d7-984b-e51679e600c4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.342091 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jrx24" podStartSLOduration=123.342073457 podStartE2EDuration="2m3.342073457s" podCreationTimestamp="2026-02-16 13:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.28035303 +0000 UTC m=+146.344683731" watchObservedRunningTime="2026-02-16 13:34:17.342073457 +0000 UTC m=+146.406404148"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.345684 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk" event={"ID":"9d7cab04-b239-4a4c-b3da-ba280200cd57","Type":"ContainerStarted","Data":"abf16424d3b08baef916d2c388c25de6c2f6088c6a2d2fb4a2e7841343f34e3f"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.364564 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-h57x4" event={"ID":"809592c1-c9ad-49f0-90a6-cea3bbebf136","Type":"ContainerStarted","Data":"11e83c0e2ca29a6ddd9960c6bb039a0ac323ead3ec7065b5d92c6c79aba13fbc"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.364619 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-h57x4" event={"ID":"809592c1-c9ad-49f0-90a6-cea3bbebf136","Type":"ContainerStarted","Data":"eb9096880609506c625be399bf4689bab2cc76a9bd7fe6145137300cc43be8df"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.376974 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8ngmk" podStartSLOduration=124.37695564 podStartE2EDuration="2m4.37695564s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.37503828 +0000 UTC m=+146.439368981" watchObservedRunningTime="2026-02-16 13:34:17.37695564 +0000 UTC m=+146.441286341"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.377738 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" podStartSLOduration=123.377732274 podStartE2EDuration="2m3.377732274s" podCreationTimestamp="2026-02-16 13:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.341986244 +0000 UTC m=+146.406316945" watchObservedRunningTime="2026-02-16 13:34:17.377732274 +0000 UTC m=+146.442062975"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.379567 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-w525k" event={"ID":"8dba5fdd-d62c-41c5-9550-d98118b3b1a1","Type":"ContainerStarted","Data":"e27dbdf7d0d9f3d63e90ac3117aa9de1e120133784f28e128823d8c22d8d326b"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.398402 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v"
Feb 16 13:34:17 crc kubenswrapper[4812]: E0216 13:34:17.398755 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:17.898740996 +0000 UTC m=+146.963071697 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.406387 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-h57x4" podStartSLOduration=123.406365863 podStartE2EDuration="2m3.406365863s" podCreationTimestamp="2026-02-16 13:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.404519515 +0000 UTC m=+146.468850216" watchObservedRunningTime="2026-02-16 13:34:17.406365863 +0000 UTC m=+146.470696564"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.417207 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl" event={"ID":"84482702-f4be-41ce-98c6-eb5161d23ba0","Type":"ContainerStarted","Data":"454135f4fdc018d72dfe22fbca8721def375e48eb448aadaca199da36c86e403"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.417246 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl" event={"ID":"84482702-f4be-41ce-98c6-eb5161d23ba0","Type":"ContainerStarted","Data":"d6d457ebb7c2e31882ec6fce4439632367cd030de6246986192e6b0105616746"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.430105 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" event={"ID":"fca937fd-eef1-4f91-b825-18d5429526a9","Type":"ContainerStarted","Data":"0358c19526fd9d5115b8ee38021054badf34997819a4529cb16cd73276d636d6"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.430151 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" event={"ID":"fca937fd-eef1-4f91-b825-18d5429526a9","Type":"ContainerStarted","Data":"66bd4ce751022289c637e05e55ad5ed14253c8f1fbf3cdd00c403ccdbb260e23"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.435888 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" event={"ID":"88330cc0-3dd3-4ff7-8661-6a79d3e1667a","Type":"ContainerStarted","Data":"0369188401e1d4ad5f363370c37b4da66f76605b158b970f55ae7b06b1bfe966"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.435924 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" event={"ID":"88330cc0-3dd3-4ff7-8661-6a79d3e1667a","Type":"ContainerStarted","Data":"6ee5dddecf65cc203c64c72d24941137eb17bb649ff00b785ad35d20da1e0a1f"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.461003 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" event={"ID":"2ece23bb-e939-4912-99fc-ea54a7c7336e","Type":"ContainerStarted","Data":"6d41942337952b3c8e37125b2c40247398998d06e2205f016020198b07db090f"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.461054 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" event={"ID":"2ece23bb-e939-4912-99fc-ea54a7c7336e","Type":"ContainerStarted","Data":"84d214735e4872d0fd9cb713e915d5054ce23d982230d929744287608b29ea08"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.463970 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" event={"ID":"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231","Type":"ContainerStarted","Data":"15d1583dafa02efaea6f9092a21449c4f85ba773664eb3dbf582c907b0eb91e7"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.464022 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" event={"ID":"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231","Type":"ContainerStarted","Data":"460297272a4ce6c46ac44814a6a9f2c00285028b21a8c551bc5fd3255afa82f8"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.465102 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.467686 4812 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kc7dg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.467739 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" podUID="7147a2f9-6f8c-4fa5-b6da-a6a67a53e231" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.482980 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb" event={"ID":"08ecaa84-5c71-4570-8f7f-d753d2eeb9ab","Type":"ContainerStarted","Data":"a44bb5301cb61ce295f614730ccc5398f5f2f1b189c7ff5ae2180ad6af7f740f"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.488555 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" podStartSLOduration=124.488536274 podStartE2EDuration="2m4.488536274s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.487378318 +0000 UTC m=+146.551709029" watchObservedRunningTime="2026-02-16 13:34:17.488536274 +0000 UTC m=+146.552866975"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.489984 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" podStartSLOduration=124.489975329 podStartE2EDuration="2m4.489975329s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.453395913 +0000 UTC m=+146.517726614" watchObservedRunningTime="2026-02-16 13:34:17.489975329 +0000 UTC m=+146.554306030"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.492160 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" event={"ID":"057b7c44-3b11-4a03-8325-0b3819b55f6f","Type":"ContainerStarted","Data":"1e68db4d2cd53154615c421af5fb2147a24ab455a76de9f9531b5cc0703cf41a"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.493263 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.496277 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klckm" event={"ID":"89281b9f-7c51-470c-aa86-bdfd398f2a2a","Type":"ContainerStarted","Data":"6baccacc4b1944f632bcb7490d014f964b3df3597c890b289ba0ff2d69c2e5a8"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.497909 4812 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g8zmc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.497956 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" podUID="9c80fa26-a106-41fa-b66d-53954e1b233b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.498304 4812 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-5sss2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.498326 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" podUID="1dcfa0e5-1712-4411-afe5-e922c185b120" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.498391 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.498420 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klckm" event={"ID":"89281b9f-7c51-470c-aa86-bdfd398f2a2a","Type":"ContainerStarted","Data":"af97090a11feba013a3ede7efc53b048d7fbd2240cc87a50a6ff579274db1fea"}
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.498819 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 13:34:17 crc kubenswrapper[4812]: E0216 13:34:17.505159 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:18.005140169 +0000 UTC m=+147.069470870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.514018 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-nplvk"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.526608 4812 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6n6fc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body=
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.526659 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" podUID="057b7c44-3b11-4a03-8325-0b3819b55f6f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.544145 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" podStartSLOduration=123.54412079 podStartE2EDuration="2m3.54412079s" podCreationTimestamp="2026-02-16 13:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.513516919 +0000 UTC m=+146.577847620" watchObservedRunningTime="2026-02-16 13:34:17.54412079 +0000 UTC m=+146.608451491"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.544504 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j84fb" podStartSLOduration=124.544498111 podStartE2EDuration="2m4.544498111s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.541989323 +0000 UTC m=+146.606320024" watchObservedRunningTime="2026-02-16 13:34:17.544498111 +0000 UTC m=+146.608828812"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.600503 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v"
Feb 16 13:34:17 crc kubenswrapper[4812]: E0216 13:34:17.604220 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:18.104207525 +0000 UTC m=+147.168538226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.605506 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4cx9t" podStartSLOduration=123.605484215 podStartE2EDuration="2m3.605484215s" podCreationTimestamp="2026-02-16 13:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.567736993 +0000 UTC m=+146.632067694" watchObservedRunningTime="2026-02-16 13:34:17.605484215 +0000 UTC m=+146.669814916"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.676206 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-942n4" podStartSLOduration=124.67618243 podStartE2EDuration="2m4.67618243s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.630825032 +0000 UTC m=+146.695155733" watchObservedRunningTime="2026-02-16 13:34:17.67618243 +0000 UTC m=+146.740513131"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.696339 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" podStartSLOduration=123.696319905 podStartE2EDuration="2m3.696319905s" podCreationTimestamp="2026-02-16 13:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.693878589 +0000 UTC m=+146.758209310" watchObservedRunningTime="2026-02-16 13:34:17.696319905 +0000 UTC m=+146.760650606"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.702112 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 13:34:17 crc kubenswrapper[4812]: E0216 13:34:17.702525 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:18.202504187 +0000 UTC m=+147.266834898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.719637 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" podStartSLOduration=123.719619979 podStartE2EDuration="2m3.719619979s" podCreationTimestamp="2026-02-16 13:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.717635717 +0000 UTC m=+146.781966438" watchObservedRunningTime="2026-02-16 13:34:17.719619979 +0000 UTC m=+146.783950680"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.742072 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klckm" podStartSLOduration=123.742054645 podStartE2EDuration="2m3.742054645s" podCreationTimestamp="2026-02-16 13:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.740521228 +0000 UTC m=+146.804851939" watchObservedRunningTime="2026-02-16 13:34:17.742054645 +0000 UTC m=+146.806385346"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.767309 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" podStartSLOduration=123.767278548 podStartE2EDuration="2m3.767278548s" podCreationTimestamp="2026-02-16 13:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.766273437 +0000 UTC m=+146.830604138" watchObservedRunningTime="2026-02-16 13:34:17.767278548 +0000 UTC m=+146.831609249"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.796211 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl" podStartSLOduration=124.796192816 podStartE2EDuration="2m4.796192816s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:17.792905194 +0000 UTC m=+146.857235915" watchObservedRunningTime="2026-02-16 13:34:17.796192816 +0000 UTC m=+146.860523517"
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.804528 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v"
Feb 16 13:34:17 crc kubenswrapper[4812]: E0216 13:34:17.805078 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:18.305062841 +0000 UTC m=+147.369393542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.905423 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 13:34:17 crc kubenswrapper[4812]: E0216 13:34:17.905817 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:18.405777718 +0000 UTC m=+147.470108419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.906066 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v"
Feb 16 13:34:17 crc kubenswrapper[4812]: E0216 13:34:17.906569 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:18.406557783 +0000 UTC m=+147.470888484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.919290 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 13:34:17 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld
Feb 16 13:34:17 crc kubenswrapper[4812]: [+]process-running ok
Feb 16 13:34:17 crc kubenswrapper[4812]: healthz check failed
Feb 16 13:34:17 crc kubenswrapper[4812]: I0216 13:34:17.919369 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.007293 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 13:34:18 crc kubenswrapper[4812]: E0216 13:34:18.007969 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:18.507950101 +0000 UTC m=+147.572280802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.110510 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v"
Feb 16 13:34:18 crc kubenswrapper[4812]: E0216 13:34:18.111076 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:18.611061022 +0000 UTC m=+147.675391723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.212064 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 13:34:18 crc kubenswrapper[4812]: E0216 13:34:18.212514 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:18.712499732 +0000 UTC m=+147.776830433 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.313518 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:18 crc kubenswrapper[4812]: E0216 13:34:18.313815 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:18.813804707 +0000 UTC m=+147.878135398 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.361724 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.361796 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.414919 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:18 crc kubenswrapper[4812]: E0216 13:34:18.415168 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:18.915139373 +0000 UTC m=+147.979470074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.415384 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:18 crc kubenswrapper[4812]: E0216 13:34:18.415806 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:18.915795384 +0000 UTC m=+147.980126085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.502317 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl" event={"ID":"84482702-f4be-41ce-98c6-eb5161d23ba0","Type":"ContainerStarted","Data":"e628a21ad19cfad9f0eb8e92db8c19dd0a1ef6fb505d682b96f28753a16f1d9a"} Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.502462 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.505416 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46895" event={"ID":"9cf0c1ed-445f-4f9c-a8c3-c903d559de4d","Type":"ContainerStarted","Data":"bd44cecc4cf3dbeba50a3d8dd3e5e92bcf2846681f4cb479ea1ee9a24d1a60b4"} Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.507364 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" event={"ID":"2ece23bb-e939-4912-99fc-ea54a7c7336e","Type":"ContainerStarted","Data":"f4575be227ad5c8d36a6170d2556d3516daa519c9ad5147b49b0e7944118efed"} Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.509410 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb" 
event={"ID":"db15826a-b0d8-4fb5-9a69-35ae6888b029","Type":"ContainerStarted","Data":"86804d873c351d14c149eb0130f501185a4a35ec923bc810d61f518335c7f925"} Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.511267 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pxwbj" event={"ID":"a3f70bd4-a15e-44dd-a610-ee085e108403","Type":"ContainerStarted","Data":"f79b8f4a7f3ba36d8b29c1c5d86381565ac11f97b95ad47bf6407d057047d522"} Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.511631 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-pxwbj" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.513119 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-w525k" event={"ID":"8dba5fdd-d62c-41c5-9550-d98118b3b1a1","Type":"ContainerStarted","Data":"54dff8d20922b4826b0bb5605d828c9babd191272539177fa9a149471d420c3d"} Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.514593 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" event={"ID":"d34aa26a-9b3b-463d-bea6-be2d12b5854c","Type":"ContainerStarted","Data":"39c348e723aae76fc7ba35bf442e64938a19a070876bd19af0e3b793213fa8f9"} Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.516234 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:18 crc kubenswrapper[4812]: E0216 13:34:18.516356 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 13:34:19.016339095 +0000 UTC m=+148.080669796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.516513 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:18 crc kubenswrapper[4812]: E0216 13:34:18.516834 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:19.01681931 +0000 UTC m=+148.081150011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.517769 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjtcl" event={"ID":"88330cc0-3dd3-4ff7-8661-6a79d3e1667a","Type":"ContainerStarted","Data":"b1526b56b645d7440ffc3f2dd5cd5fa706ab0c32ae5fc72a84dd7a5414967540"} Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.520698 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" event={"ID":"7f4d6c63-7c73-4fae-8738-04def1b3e5e3","Type":"ContainerStarted","Data":"178521f6346f795bfebd48b2e11111b9047d61c8cc262297bf27b899724e3fb7"} Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.520760 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" event={"ID":"7f4d6c63-7c73-4fae-8738-04def1b3e5e3","Type":"ContainerStarted","Data":"f672533a16e85365c82401319d588aa7e297f69125cb1176fdea672b2f47831f"} Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.523283 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nwjmg" event={"ID":"13167121-9190-4ef3-b635-d528457b4c53","Type":"ContainerStarted","Data":"b1308a3d9969c6ec5f881e6a0085eae9f2273895fc479532098b8d3cc357b9c0"} Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.524186 4812 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kc7dg container/marketplace-operator namespace/openshift-marketplace: 
Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.524221 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" podUID="7147a2f9-6f8c-4fa5-b6da-a6a67a53e231" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.524466 4812 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-2hqkd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.524499 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" podUID="b26542fa-2c38-47d7-984b-e51679e600c4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.524511 4812 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g8zmc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.524559 4812 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6n6fc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: 
connection refused" start-of-body= Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.524556 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" podUID="9c80fa26-a106-41fa-b66d-53954e1b233b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.524580 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" podUID="057b7c44-3b11-4a03-8325-0b3819b55f6f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.581892 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-46895" podStartSLOduration=124.58187447 podStartE2EDuration="2m4.58187447s" podCreationTimestamp="2026-02-16 13:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:18.579025112 +0000 UTC m=+147.643355813" watchObservedRunningTime="2026-02-16 13:34:18.58187447 +0000 UTC m=+147.646205171" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.582182 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl" podStartSLOduration=124.582177339 podStartE2EDuration="2m4.582177339s" podCreationTimestamp="2026-02-16 13:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:18.535222282 +0000 UTC m=+147.599552983" watchObservedRunningTime="2026-02-16 
13:34:18.582177339 +0000 UTC m=+147.646508040" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.606640 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkjwb" podStartSLOduration=125.606619128 podStartE2EDuration="2m5.606619128s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:18.605931797 +0000 UTC m=+147.670262498" watchObservedRunningTime="2026-02-16 13:34:18.606619128 +0000 UTC m=+147.670949829" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.617080 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:18 crc kubenswrapper[4812]: E0216 13:34:18.617269 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:19.117243658 +0000 UTC m=+148.181574359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.618684 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:18 crc kubenswrapper[4812]: E0216 13:34:18.619953 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:19.119935772 +0000 UTC m=+148.184266533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.688610 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-w525k" podStartSLOduration=125.688587013 podStartE2EDuration="2m5.688587013s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:18.685172687 +0000 UTC m=+147.749503398" watchObservedRunningTime="2026-02-16 13:34:18.688587013 +0000 UTC m=+147.752917714" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.722668 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:18 crc kubenswrapper[4812]: E0216 13:34:18.723032 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:19.223012922 +0000 UTC m=+148.287343633 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.791989 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ll6fj" podStartSLOduration=125.791969993 podStartE2EDuration="2m5.791969993s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:18.729086261 +0000 UTC m=+147.793416962" watchObservedRunningTime="2026-02-16 13:34:18.791969993 +0000 UTC m=+147.856300694" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.800323 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-pxwbj" podStartSLOduration=7.800295062 podStartE2EDuration="7.800295062s" podCreationTimestamp="2026-02-16 13:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:18.785880014 +0000 UTC m=+147.850210715" watchObservedRunningTime="2026-02-16 13:34:18.800295062 +0000 UTC m=+147.864625773" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.803195 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.823859 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:18 crc kubenswrapper[4812]: E0216 13:34:18.824142 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:19.324130902 +0000 UTC m=+148.388461603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.843160 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-nwjmg" podStartSLOduration=124.843138902 podStartE2EDuration="2m4.843138902s" podCreationTimestamp="2026-02-16 13:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:18.834835514 +0000 UTC m=+147.899166215" watchObservedRunningTime="2026-02-16 13:34:18.843138902 +0000 UTC m=+147.907469603" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.908688 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" podStartSLOduration=125.908651856 podStartE2EDuration="2m5.908651856s" 
podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:18.905067425 +0000 UTC m=+147.969398126" watchObservedRunningTime="2026-02-16 13:34:18.908651856 +0000 UTC m=+147.972982567" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.919081 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:18 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:18 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:18 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.919489 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.925282 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:18 crc kubenswrapper[4812]: E0216 13:34:18.925649 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:19.425612373 +0000 UTC m=+148.489943084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.933930 4812 csr.go:261] certificate signing request csr-wpv5h is approved, waiting to be issued Feb 16 13:34:18 crc kubenswrapper[4812]: I0216 13:34:18.942205 4812 csr.go:257] certificate signing request csr-wpv5h is issued Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.027274 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.027590 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:19.527578148 +0000 UTC m=+148.591908839 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.128573 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.128706 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:19.628687458 +0000 UTC m=+148.693018169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.128949 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.129179 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:19.629172233 +0000 UTC m=+148.693502934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.230321 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.230471 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:19.730427536 +0000 UTC m=+148.794758237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.230638 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.230929 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:19.730919412 +0000 UTC m=+148.795250113 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.331431 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.331647 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:19.831617098 +0000 UTC m=+148.895947809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.331873 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.332174 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:19.832161425 +0000 UTC m=+148.896492126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.352158 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl" Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.433654 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.433863 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:19.933831242 +0000 UTC m=+148.998161953 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.434251 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.434611 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:19.934601846 +0000 UTC m=+148.998932547 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.484716 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.535772 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.536220 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.03620151 +0000 UTC m=+149.100532211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.571450 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" event={"ID":"d34aa26a-9b3b-463d-bea6-be2d12b5854c","Type":"ContainerStarted","Data":"833ecc6b2c0265b4a452dd58ffde289d48f56c2e063ef40ec26c4c910c24d704"} Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.575006 4812 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kc7dg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.575065 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" podUID="7147a2f9-6f8c-4fa5-b6da-a6a67a53e231" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.586806 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gb9bh" Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.606428 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2hqkd" Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.637541 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.638993 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.138974071 +0000 UTC m=+149.203304782 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.740621 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.740842 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.240810073 +0000 UTC m=+149.305140784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.740899 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.741542 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.241531845 +0000 UTC m=+149.305862546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.842515 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.34249874 +0000 UTC m=+149.406829441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.842436 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.842780 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.843054 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.343045727 +0000 UTC m=+149.407376428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.926668 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:19 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:19 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:19 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.926725 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.945584 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 13:29:18 +0000 UTC, rotation deadline 
is 2027-01-07 15:47:48.642820509 +0000 UTC Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.945634 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7802h13m28.697189687s for next certificate rotation Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.945988 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.946162 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.446142658 +0000 UTC m=+149.510473369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.946384 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:19 crc kubenswrapper[4812]: E0216 13:34:19.946744 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.446729016 +0000 UTC m=+149.511059717 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:19 crc kubenswrapper[4812]: I0216 13:34:19.999590 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ttfl" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.055552 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:20 crc kubenswrapper[4812]: E0216 13:34:20.056010 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.555991369 +0000 UTC m=+149.620322070 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.158318 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:20 crc kubenswrapper[4812]: E0216 13:34:20.159196 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.659181723 +0000 UTC m=+149.723512424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.259500 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:20 crc kubenswrapper[4812]: E0216 13:34:20.259708 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.759678133 +0000 UTC m=+149.824008844 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.259851 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:20 crc kubenswrapper[4812]: E0216 13:34:20.260149 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.760138447 +0000 UTC m=+149.824469138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.360889 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:20 crc kubenswrapper[4812]: E0216 13:34:20.361084 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.86105493 +0000 UTC m=+149.925385631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.361137 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:20 crc kubenswrapper[4812]: E0216 13:34:20.361561 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.861544926 +0000 UTC m=+149.925875697 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.462346 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:20 crc kubenswrapper[4812]: E0216 13:34:20.462822 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:20.962803258 +0000 UTC m=+150.027133959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.563949 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:20 crc kubenswrapper[4812]: E0216 13:34:20.564286 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:21.064271959 +0000 UTC m=+150.128602660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.572433 4812 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6n6fc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.572500 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" podUID="057b7c44-3b11-4a03-8325-0b3819b55f6f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.577767 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" event={"ID":"d34aa26a-9b3b-463d-bea6-be2d12b5854c","Type":"ContainerStarted","Data":"5802d4e634ee5afd015ea1b0d1225497cfdff8187b45ef1f35614ed684a132d2"} Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.660658 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gfhfv"] Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.664340 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gfhfv" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.669612 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:20 crc kubenswrapper[4812]: E0216 13:34:20.670697 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:21.170683253 +0000 UTC m=+150.235013954 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.678730 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.690695 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gfhfv"] Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.771174 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzmm7\" (UniqueName: \"kubernetes.io/projected/567e2fcc-e342-41e9-a406-4758f7c5551e-kube-api-access-rzmm7\") pod 
\"certified-operators-gfhfv\" (UID: \"567e2fcc-e342-41e9-a406-4758f7c5551e\") " pod="openshift-marketplace/certified-operators-gfhfv" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.771217 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567e2fcc-e342-41e9-a406-4758f7c5551e-utilities\") pod \"certified-operators-gfhfv\" (UID: \"567e2fcc-e342-41e9-a406-4758f7c5551e\") " pod="openshift-marketplace/certified-operators-gfhfv" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.771247 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.771270 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.771304 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.771332 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567e2fcc-e342-41e9-a406-4758f7c5551e-catalog-content\") pod \"certified-operators-gfhfv\" (UID: \"567e2fcc-e342-41e9-a406-4758f7c5551e\") " pod="openshift-marketplace/certified-operators-gfhfv" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.771354 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.771376 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:34:20 crc kubenswrapper[4812]: E0216 13:34:20.772182 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:21.272153293 +0000 UTC m=+150.336483994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.774360 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.779031 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.779131 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.779936 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.792712 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.851896 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t9zmh"] Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.852831 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t9zmh" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.855833 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.872353 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:20 crc kubenswrapper[4812]: E0216 13:34:20.872566 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:21.37253161 +0000 UTC m=+150.436862311 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.872666 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzmm7\" (UniqueName: \"kubernetes.io/projected/567e2fcc-e342-41e9-a406-4758f7c5551e-kube-api-access-rzmm7\") pod \"certified-operators-gfhfv\" (UID: \"567e2fcc-e342-41e9-a406-4758f7c5551e\") " pod="openshift-marketplace/certified-operators-gfhfv" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.872712 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567e2fcc-e342-41e9-a406-4758f7c5551e-utilities\") pod \"certified-operators-gfhfv\" (UID: \"567e2fcc-e342-41e9-a406-4758f7c5551e\") " pod="openshift-marketplace/certified-operators-gfhfv" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.872770 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.872807 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567e2fcc-e342-41e9-a406-4758f7c5551e-catalog-content\") pod \"certified-operators-gfhfv\" (UID: 
\"567e2fcc-e342-41e9-a406-4758f7c5551e\") " pod="openshift-marketplace/certified-operators-gfhfv" Feb 16 13:34:20 crc kubenswrapper[4812]: E0216 13:34:20.873112 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:21.373100297 +0000 UTC m=+150.437430998 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.873176 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567e2fcc-e342-41e9-a406-4758f7c5551e-utilities\") pod \"certified-operators-gfhfv\" (UID: \"567e2fcc-e342-41e9-a406-4758f7c5551e\") " pod="openshift-marketplace/certified-operators-gfhfv" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.873486 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567e2fcc-e342-41e9-a406-4758f7c5551e-catalog-content\") pod \"certified-operators-gfhfv\" (UID: \"567e2fcc-e342-41e9-a406-4758f7c5551e\") " pod="openshift-marketplace/certified-operators-gfhfv" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.878174 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t9zmh"] Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.905733 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.908666 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.918552 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzmm7\" (UniqueName: \"kubernetes.io/projected/567e2fcc-e342-41e9-a406-4758f7c5551e-kube-api-access-rzmm7\") pod \"certified-operators-gfhfv\" (UID: \"567e2fcc-e342-41e9-a406-4758f7c5551e\") " pod="openshift-marketplace/certified-operators-gfhfv" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.921618 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:20 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:20 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:20 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.921675 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.973788 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:20 crc kubenswrapper[4812]: E0216 
13:34:20.974138 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:21.474107754 +0000 UTC m=+150.538438455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.974336 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvng8\" (UniqueName: \"kubernetes.io/projected/2984d252-d29e-49b5-87ed-9ce7d19edc6d-kube-api-access-kvng8\") pod \"community-operators-t9zmh\" (UID: \"2984d252-d29e-49b5-87ed-9ce7d19edc6d\") " pod="openshift-marketplace/community-operators-t9zmh" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.974525 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2984d252-d29e-49b5-87ed-9ce7d19edc6d-catalog-content\") pod \"community-operators-t9zmh\" (UID: \"2984d252-d29e-49b5-87ed-9ce7d19edc6d\") " pod="openshift-marketplace/community-operators-t9zmh" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.974636 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: 
\"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.974687 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2984d252-d29e-49b5-87ed-9ce7d19edc6d-utilities\") pod \"community-operators-t9zmh\" (UID: \"2984d252-d29e-49b5-87ed-9ce7d19edc6d\") " pod="openshift-marketplace/community-operators-t9zmh" Feb 16 13:34:20 crc kubenswrapper[4812]: E0216 13:34:20.975647 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:21.475632521 +0000 UTC m=+150.539963232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:20 crc kubenswrapper[4812]: I0216 13:34:20.996719 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gfhfv" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.073120 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cxh5t"] Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.074252 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.096623 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.096981 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvng8\" (UniqueName: \"kubernetes.io/projected/2984d252-d29e-49b5-87ed-9ce7d19edc6d-kube-api-access-kvng8\") pod \"community-operators-t9zmh\" (UID: \"2984d252-d29e-49b5-87ed-9ce7d19edc6d\") " pod="openshift-marketplace/community-operators-t9zmh" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.097021 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2984d252-d29e-49b5-87ed-9ce7d19edc6d-catalog-content\") pod \"community-operators-t9zmh\" (UID: \"2984d252-d29e-49b5-87ed-9ce7d19edc6d\") " pod="openshift-marketplace/community-operators-t9zmh" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.097111 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2984d252-d29e-49b5-87ed-9ce7d19edc6d-utilities\") pod \"community-operators-t9zmh\" (UID: \"2984d252-d29e-49b5-87ed-9ce7d19edc6d\") " pod="openshift-marketplace/community-operators-t9zmh" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.097974 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2984d252-d29e-49b5-87ed-9ce7d19edc6d-utilities\") pod \"community-operators-t9zmh\" (UID: \"2984d252-d29e-49b5-87ed-9ce7d19edc6d\") " 
pod="openshift-marketplace/community-operators-t9zmh" Feb 16 13:34:21 crc kubenswrapper[4812]: E0216 13:34:21.098059 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:21.598041481 +0000 UTC m=+150.662372182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.098592 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2984d252-d29e-49b5-87ed-9ce7d19edc6d-catalog-content\") pod \"community-operators-t9zmh\" (UID: \"2984d252-d29e-49b5-87ed-9ce7d19edc6d\") " pod="openshift-marketplace/community-operators-t9zmh" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.132368 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cxh5t"] Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.162155 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvng8\" (UniqueName: \"kubernetes.io/projected/2984d252-d29e-49b5-87ed-9ce7d19edc6d-kube-api-access-kvng8\") pod \"community-operators-t9zmh\" (UID: \"2984d252-d29e-49b5-87ed-9ce7d19edc6d\") " pod="openshift-marketplace/community-operators-t9zmh" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.177934 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t9zmh" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.198195 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j5jt\" (UniqueName: \"kubernetes.io/projected/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-kube-api-access-5j5jt\") pod \"certified-operators-cxh5t\" (UID: \"f4e6d69a-43ea-4b9b-a150-640b86bfbf42\") " pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.198242 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.198282 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-utilities\") pod \"certified-operators-cxh5t\" (UID: \"f4e6d69a-43ea-4b9b-a150-640b86bfbf42\") " pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.198303 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-catalog-content\") pod \"certified-operators-cxh5t\" (UID: \"f4e6d69a-43ea-4b9b-a150-640b86bfbf42\") " pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:34:21 crc kubenswrapper[4812]: E0216 13:34:21.198663 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:21.698651895 +0000 UTC m=+150.762982596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.249628 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zt4tm"] Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.250574 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.267967 4812 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.299388 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.299646 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-utilities\") pod \"certified-operators-cxh5t\" (UID: \"f4e6d69a-43ea-4b9b-a150-640b86bfbf42\") " pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:34:21 
crc kubenswrapper[4812]: I0216 13:34:21.299673 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-catalog-content\") pod \"certified-operators-cxh5t\" (UID: \"f4e6d69a-43ea-4b9b-a150-640b86bfbf42\") " pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.299727 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j5jt\" (UniqueName: \"kubernetes.io/projected/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-kube-api-access-5j5jt\") pod \"certified-operators-cxh5t\" (UID: \"f4e6d69a-43ea-4b9b-a150-640b86bfbf42\") " pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:34:21 crc kubenswrapper[4812]: E0216 13:34:21.300026 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:21.800012032 +0000 UTC m=+150.864342733 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.300319 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-utilities\") pod \"certified-operators-cxh5t\" (UID: \"f4e6d69a-43ea-4b9b-a150-640b86bfbf42\") " pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.300611 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-catalog-content\") pod \"certified-operators-cxh5t\" (UID: \"f4e6d69a-43ea-4b9b-a150-640b86bfbf42\") " pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.303411 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zt4tm"] Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.321638 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j5jt\" (UniqueName: \"kubernetes.io/projected/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-kube-api-access-5j5jt\") pod \"certified-operators-cxh5t\" (UID: \"f4e6d69a-43ea-4b9b-a150-640b86bfbf42\") " pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.401496 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v9kv\" (UniqueName: 
\"kubernetes.io/projected/c25320a0-e0f3-40ae-b953-e249556bc4f6-kube-api-access-9v9kv\") pod \"community-operators-zt4tm\" (UID: \"c25320a0-e0f3-40ae-b953-e249556bc4f6\") " pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.401530 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25320a0-e0f3-40ae-b953-e249556bc4f6-catalog-content\") pod \"community-operators-zt4tm\" (UID: \"c25320a0-e0f3-40ae-b953-e249556bc4f6\") " pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.401559 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25320a0-e0f3-40ae-b953-e249556bc4f6-utilities\") pod \"community-operators-zt4tm\" (UID: \"c25320a0-e0f3-40ae-b953-e249556bc4f6\") " pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.401595 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:21 crc kubenswrapper[4812]: E0216 13:34:21.401857 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:21.901845784 +0000 UTC m=+150.966176485 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.432803 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.502084 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.502277 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25320a0-e0f3-40ae-b953-e249556bc4f6-utilities\") pod \"community-operators-zt4tm\" (UID: \"c25320a0-e0f3-40ae-b953-e249556bc4f6\") " pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.502393 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v9kv\" (UniqueName: \"kubernetes.io/projected/c25320a0-e0f3-40ae-b953-e249556bc4f6-kube-api-access-9v9kv\") pod \"community-operators-zt4tm\" (UID: \"c25320a0-e0f3-40ae-b953-e249556bc4f6\") " pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.502434 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25320a0-e0f3-40ae-b953-e249556bc4f6-catalog-content\") pod \"community-operators-zt4tm\" (UID: \"c25320a0-e0f3-40ae-b953-e249556bc4f6\") " pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.502886 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25320a0-e0f3-40ae-b953-e249556bc4f6-catalog-content\") pod \"community-operators-zt4tm\" (UID: \"c25320a0-e0f3-40ae-b953-e249556bc4f6\") " pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:34:21 crc kubenswrapper[4812]: E0216 13:34:21.502976 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:22.002961433 +0000 UTC m=+151.067292134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.503194 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25320a0-e0f3-40ae-b953-e249556bc4f6-utilities\") pod \"community-operators-zt4tm\" (UID: \"c25320a0-e0f3-40ae-b953-e249556bc4f6\") " pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.565459 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v9kv\" (UniqueName: \"kubernetes.io/projected/c25320a0-e0f3-40ae-b953-e249556bc4f6-kube-api-access-9v9kv\") pod \"community-operators-zt4tm\" (UID: \"c25320a0-e0f3-40ae-b953-e249556bc4f6\") " pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.600040 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.604775 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:21 crc kubenswrapper[4812]: E0216 13:34:21.605110 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:22.105098165 +0000 UTC m=+151.169428866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.637913 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" event={"ID":"d34aa26a-9b3b-463d-bea6-be2d12b5854c","Type":"ContainerStarted","Data":"9332fa28a91da5037125a55235e2be2d17012d441e58d567a0d4e5f58e73ea1f"} Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.639918 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0b09d4721315df20d813bc3e75617950b0e3c0b96d6b32cc7502c21afc17238d"} Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.710965 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:21 crc kubenswrapper[4812]: E0216 13:34:21.711418 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:22.211395635 +0000 UTC m=+151.275726336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.748364 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-dqt5h" podStartSLOduration=10.748346012 podStartE2EDuration="10.748346012s" podCreationTimestamp="2026-02-16 13:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:21.718765654 +0000 UTC m=+150.783096365" watchObservedRunningTime="2026-02-16 13:34:21.748346012 +0000 UTC m=+150.812676713" Feb 16 13:34:21 crc 
kubenswrapper[4812]: I0216 13:34:21.812927 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:21 crc kubenswrapper[4812]: E0216 13:34:21.813238 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:22.313224547 +0000 UTC m=+151.377555248 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:21 crc kubenswrapper[4812]: W0216 13:34:21.833048 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-507e1a3623ac859278966c2b233195af14703f7f5023dce8c2a9fea0a2b9566a WatchSource:0}: Error finding container 507e1a3623ac859278966c2b233195af14703f7f5023dce8c2a9fea0a2b9566a: Status 404 returned error can't find the container with id 507e1a3623ac859278966c2b233195af14703f7f5023dce8c2a9fea0a2b9566a Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.876134 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gfhfv"] Feb 16 13:34:21 crc 
kubenswrapper[4812]: I0216 13:34:21.904636 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t9zmh"] Feb 16 13:34:21 crc kubenswrapper[4812]: I0216 13:34:21.915330 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:21 crc kubenswrapper[4812]: E0216 13:34:21.915713 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 13:34:22.415699058 +0000 UTC m=+151.480029759 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:21.995136 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:22 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:22 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:22 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:21.995213 4812 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.030705 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:22 crc kubenswrapper[4812]: E0216 13:34:22.068304 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 13:34:22.568285366 +0000 UTC m=+151.632616067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2f89v" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.078629 4812 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T13:34:21.267988358Z","Handler":null,"Name":""} Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.122914 4812 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.123145 4812 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.177206 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.192050 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.195651 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cxh5t"] Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.278272 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.288494 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.288543 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.319215 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2f89v\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.379326 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zt4tm"] Feb 16 13:34:22 crc kubenswrapper[4812]: W0216 13:34:22.389641 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc25320a0_e0f3_40ae_b953_e249556bc4f6.slice/crio-1f936fb7a3ace551d9e18a255de6c23597b073c922952265438d5afb9cb40541 WatchSource:0}: Error finding container 1f936fb7a3ace551d9e18a255de6c23597b073c922952265438d5afb9cb40541: Status 404 returned error can't find the container with id 1f936fb7a3ace551d9e18a255de6c23597b073c922952265438d5afb9cb40541 Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.466924 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.468091 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.472362 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.473757 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.482523 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.516250 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.583898 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0bd26c64-69c9-4dcb-bd15-8f3a7f156cac-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0bd26c64-69c9-4dcb-bd15-8f3a7f156cac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.583937 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0bd26c64-69c9-4dcb-bd15-8f3a7f156cac-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0bd26c64-69c9-4dcb-bd15-8f3a7f156cac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.648799 4812 generic.go:334] "Generic (PLEG): container finished" podID="fca937fd-eef1-4f91-b825-18d5429526a9" containerID="0358c19526fd9d5115b8ee38021054badf34997819a4529cb16cd73276d636d6" exitCode=0 Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.648910 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" event={"ID":"fca937fd-eef1-4f91-b825-18d5429526a9","Type":"ContainerDied","Data":"0358c19526fd9d5115b8ee38021054badf34997819a4529cb16cd73276d636d6"} Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.650095 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"61300ad8d50b8fac792184fd882f7097f4208671c6627a6da5fb6c457ffabab9"} Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.650159 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"507e1a3623ac859278966c2b233195af14703f7f5023dce8c2a9fea0a2b9566a"} Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.657145 4812 generic.go:334] "Generic (PLEG): container finished" podID="c25320a0-e0f3-40ae-b953-e249556bc4f6" containerID="f2032441992b3698e54b610589005cacc36456cb0ab4bff4e34711e25b6b7cf0" exitCode=0 Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.657298 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zt4tm" event={"ID":"c25320a0-e0f3-40ae-b953-e249556bc4f6","Type":"ContainerDied","Data":"f2032441992b3698e54b610589005cacc36456cb0ab4bff4e34711e25b6b7cf0"} Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.657325 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zt4tm" event={"ID":"c25320a0-e0f3-40ae-b953-e249556bc4f6","Type":"ContainerStarted","Data":"1f936fb7a3ace551d9e18a255de6c23597b073c922952265438d5afb9cb40541"} Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.668980 4812 generic.go:334] "Generic (PLEG): container finished" podID="2984d252-d29e-49b5-87ed-9ce7d19edc6d" containerID="a0d9f7ca67851f90327c445631cdb6421d57b0c175a67e8823d1ff97f783fa73" exitCode=0 Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.669133 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t9zmh" event={"ID":"2984d252-d29e-49b5-87ed-9ce7d19edc6d","Type":"ContainerDied","Data":"a0d9f7ca67851f90327c445631cdb6421d57b0c175a67e8823d1ff97f783fa73"} Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.669168 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t9zmh" 
event={"ID":"2984d252-d29e-49b5-87ed-9ce7d19edc6d","Type":"ContainerStarted","Data":"0f0c5ee9c2deb094d00298afed04360761e56debce67357e8a92a4059eeddc94"} Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.669278 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.672774 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8cde71ecb179c7aa9a228587861b5ed520317bca4127fb9e7d85a99269432a77"} Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.672850 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"087a6a09b50755d03408c722e616db43f90489fabae60d564eb422a53637ee24"} Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.676658 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e85731071a2d7bd9d0dcd0be03e7851b25523ffa05cf32acd9eb50fca25998b2"} Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.676733 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.679974 4812 generic.go:334] "Generic (PLEG): container finished" podID="567e2fcc-e342-41e9-a406-4758f7c5551e" containerID="c3e448bd6746dd721c7c10de8c0cc104f4f250b3b1b2190af14861eb650cdb84" exitCode=0 Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.680042 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfhfv" 
event={"ID":"567e2fcc-e342-41e9-a406-4758f7c5551e","Type":"ContainerDied","Data":"c3e448bd6746dd721c7c10de8c0cc104f4f250b3b1b2190af14861eb650cdb84"} Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.680072 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfhfv" event={"ID":"567e2fcc-e342-41e9-a406-4758f7c5551e","Type":"ContainerStarted","Data":"8389cdbf0a0a10f94ee1a07f13cf7eb695b55db174d6b85f28781e6e8f9eaaf2"} Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.685587 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0bd26c64-69c9-4dcb-bd15-8f3a7f156cac-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0bd26c64-69c9-4dcb-bd15-8f3a7f156cac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.685628 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0bd26c64-69c9-4dcb-bd15-8f3a7f156cac-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0bd26c64-69c9-4dcb-bd15-8f3a7f156cac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.685678 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0bd26c64-69c9-4dcb-bd15-8f3a7f156cac-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0bd26c64-69c9-4dcb-bd15-8f3a7f156cac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.686559 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4e6d69a-43ea-4b9b-a150-640b86bfbf42" containerID="7dee5d7a7242e87467fc0dded2176f43137138a0fa79b07208d4c85bf8bfec68" exitCode=0 Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.686753 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-cxh5t" event={"ID":"f4e6d69a-43ea-4b9b-a150-640b86bfbf42","Type":"ContainerDied","Data":"7dee5d7a7242e87467fc0dded2176f43137138a0fa79b07208d4c85bf8bfec68"} Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.687102 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxh5t" event={"ID":"f4e6d69a-43ea-4b9b-a150-640b86bfbf42","Type":"ContainerStarted","Data":"12cd9cf36ff667dd43ba7267f2e24d663c34d0b0f199ef4cde2a7a9b5e8ecdf6"} Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.711388 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0bd26c64-69c9-4dcb-bd15-8f3a7f156cac-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0bd26c64-69c9-4dcb-bd15-8f3a7f156cac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.720092 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2f89v"] Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.801484 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.802610 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.804571 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.807506 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.815177 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.822874 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cqhcl"] Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.824179 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqhcl" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.826069 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.831433 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqhcl"] Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.858882 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.900435 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/108c709d-3892-437c-8389-eacf83464173-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"108c709d-3892-437c-8389-eacf83464173\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.900576 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95c4d\" (UniqueName: \"kubernetes.io/projected/a297c2d9-88a8-4019-94f5-c1f5498bee86-kube-api-access-95c4d\") pod \"redhat-marketplace-cqhcl\" (UID: \"a297c2d9-88a8-4019-94f5-c1f5498bee86\") " pod="openshift-marketplace/redhat-marketplace-cqhcl" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.900625 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/108c709d-3892-437c-8389-eacf83464173-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"108c709d-3892-437c-8389-eacf83464173\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.900692 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a297c2d9-88a8-4019-94f5-c1f5498bee86-catalog-content\") pod \"redhat-marketplace-cqhcl\" (UID: \"a297c2d9-88a8-4019-94f5-c1f5498bee86\") " pod="openshift-marketplace/redhat-marketplace-cqhcl" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.900784 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a297c2d9-88a8-4019-94f5-c1f5498bee86-utilities\") pod \"redhat-marketplace-cqhcl\" (UID: \"a297c2d9-88a8-4019-94f5-c1f5498bee86\") " pod="openshift-marketplace/redhat-marketplace-cqhcl" Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.921570 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:22 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:22 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:22 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:22 crc kubenswrapper[4812]: I0216 13:34:22.921637 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.003713 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a297c2d9-88a8-4019-94f5-c1f5498bee86-utilities\") pod \"redhat-marketplace-cqhcl\" (UID: \"a297c2d9-88a8-4019-94f5-c1f5498bee86\") " pod="openshift-marketplace/redhat-marketplace-cqhcl" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.004534 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/108c709d-3892-437c-8389-eacf83464173-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"108c709d-3892-437c-8389-eacf83464173\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.004609 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95c4d\" 
(UniqueName: \"kubernetes.io/projected/a297c2d9-88a8-4019-94f5-c1f5498bee86-kube-api-access-95c4d\") pod \"redhat-marketplace-cqhcl\" (UID: \"a297c2d9-88a8-4019-94f5-c1f5498bee86\") " pod="openshift-marketplace/redhat-marketplace-cqhcl" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.004667 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/108c709d-3892-437c-8389-eacf83464173-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"108c709d-3892-437c-8389-eacf83464173\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.004747 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a297c2d9-88a8-4019-94f5-c1f5498bee86-catalog-content\") pod \"redhat-marketplace-cqhcl\" (UID: \"a297c2d9-88a8-4019-94f5-c1f5498bee86\") " pod="openshift-marketplace/redhat-marketplace-cqhcl" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.005729 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a297c2d9-88a8-4019-94f5-c1f5498bee86-catalog-content\") pod \"redhat-marketplace-cqhcl\" (UID: \"a297c2d9-88a8-4019-94f5-c1f5498bee86\") " pod="openshift-marketplace/redhat-marketplace-cqhcl" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.006044 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a297c2d9-88a8-4019-94f5-c1f5498bee86-utilities\") pod \"redhat-marketplace-cqhcl\" (UID: \"a297c2d9-88a8-4019-94f5-c1f5498bee86\") " pod="openshift-marketplace/redhat-marketplace-cqhcl" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.006513 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/108c709d-3892-437c-8389-eacf83464173-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"108c709d-3892-437c-8389-eacf83464173\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.025489 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/108c709d-3892-437c-8389-eacf83464173-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"108c709d-3892-437c-8389-eacf83464173\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.026042 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95c4d\" (UniqueName: \"kubernetes.io/projected/a297c2d9-88a8-4019-94f5-c1f5498bee86-kube-api-access-95c4d\") pod \"redhat-marketplace-cqhcl\" (UID: \"a297c2d9-88a8-4019-94f5-c1f5498bee86\") " pod="openshift-marketplace/redhat-marketplace-cqhcl" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.109696 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.146337 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.174945 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.189292 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.189345 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.192264 4812 patch_prober.go:28] interesting pod/console-f9d7485db-tpgqc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.192314 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tpgqc" podUID="d8f24d90-54d8-4344-8140-c9fa919b456a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.196649 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-sv88f container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.196690 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sv88f" podUID="c221ee5f-91c7-4ca7-9567-55cd7bd72beb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.196864 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-sv88f container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.196885 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sv88f" podUID="c221ee5f-91c7-4ca7-9567-55cd7bd72beb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.216385 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lbfqw"] Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.217717 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.220683 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqhcl" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.231335 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbfqw"] Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.309196 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-catalog-content\") pod \"redhat-marketplace-lbfqw\" (UID: \"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce\") " pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.309496 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87fbb\" (UniqueName: \"kubernetes.io/projected/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-kube-api-access-87fbb\") pod \"redhat-marketplace-lbfqw\" (UID: \"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce\") " pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.309723 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-utilities\") pod \"redhat-marketplace-lbfqw\" (UID: \"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce\") " pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.411363 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-catalog-content\") pod \"redhat-marketplace-lbfqw\" (UID: \"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce\") " pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.411418 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-87fbb\" (UniqueName: \"kubernetes.io/projected/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-kube-api-access-87fbb\") pod \"redhat-marketplace-lbfqw\" (UID: \"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce\") " pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.411673 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-utilities\") pod \"redhat-marketplace-lbfqw\" (UID: \"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce\") " pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.412082 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-catalog-content\") pod \"redhat-marketplace-lbfqw\" (UID: \"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce\") " pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.412105 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-utilities\") pod \"redhat-marketplace-lbfqw\" (UID: \"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce\") " pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.435680 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87fbb\" (UniqueName: \"kubernetes.io/projected/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-kube-api-access-87fbb\") pod \"redhat-marketplace-lbfqw\" (UID: \"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce\") " pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.453824 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.453906 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.471073 4812 patch_prober.go:28] interesting pod/apiserver-76f77b778f-wc6pn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 13:34:23 crc kubenswrapper[4812]: [+]log ok Feb 16 13:34:23 crc kubenswrapper[4812]: [+]etcd ok Feb 16 13:34:23 crc kubenswrapper[4812]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 13:34:23 crc kubenswrapper[4812]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 13:34:23 crc kubenswrapper[4812]: [+]poststarthook/max-in-flight-filter ok Feb 16 13:34:23 crc kubenswrapper[4812]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 13:34:23 crc kubenswrapper[4812]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 16 13:34:23 crc kubenswrapper[4812]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 16 13:34:23 crc kubenswrapper[4812]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 16 13:34:23 crc kubenswrapper[4812]: [+]poststarthook/project.openshift.io-projectcache ok Feb 16 13:34:23 crc kubenswrapper[4812]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 16 13:34:23 crc kubenswrapper[4812]: [+]poststarthook/openshift.io-startinformers ok Feb 16 13:34:23 crc kubenswrapper[4812]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 16 13:34:23 crc kubenswrapper[4812]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 13:34:23 crc kubenswrapper[4812]: livez check failed Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.471136 4812 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" podUID="7f4d6c63-7c73-4fae-8738-04def1b3e5e3" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.471925 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.562823 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.719845 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"108c709d-3892-437c-8389-eacf83464173","Type":"ContainerStarted","Data":"b0a4b0d83b89300755649b52e58d7bbbbef1309fb0af3feaac597dfced147d70"} Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.727568 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0bd26c64-69c9-4dcb-bd15-8f3a7f156cac","Type":"ContainerStarted","Data":"33673eae05aecef3485598fe0274ae21ae8d8f525df6e1ddd2b811308e0e3fbb"} Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.727612 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0bd26c64-69c9-4dcb-bd15-8f3a7f156cac","Type":"ContainerStarted","Data":"eee14dd60549b5eaddd922882ed3b82bb6606245e8511149b3c7356571e3b3ae"} Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.730554 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" event={"ID":"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7","Type":"ContainerStarted","Data":"87d76816501304b9e257e87bb1166d8efd9fcc4f3c20de53d92b4ce0c88fc651"} Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.730619 4812 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" event={"ID":"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7","Type":"ContainerStarted","Data":"e5bd39cd6b902b0dc8348d4642a3ada4a57dc2b836cfc4cb97213468b0960739"} Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.759681 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=1.75965886 podStartE2EDuration="1.75965886s" podCreationTimestamp="2026-02-16 13:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:23.758623118 +0000 UTC m=+152.822953819" watchObservedRunningTime="2026-02-16 13:34:23.75965886 +0000 UTC m=+152.823989561" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.794940 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqhcl"] Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.798157 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" podStartSLOduration=130.798139805 podStartE2EDuration="2m10.798139805s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:23.78383406 +0000 UTC m=+152.848164761" watchObservedRunningTime="2026-02-16 13:34:23.798139805 +0000 UTC m=+152.862470506" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.831792 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fjz4f"] Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.844835 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fjz4f" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.849462 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.851569 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fjz4f"] Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.914757 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.919349 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbfqw"] Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.919396 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.925390 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6n6fc" Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.934813 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:23 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:23 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:23 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.934854 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:23 crc kubenswrapper[4812]: W0216 13:34:23.955728 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda297c2d9_88a8_4019_94f5_c1f5498bee86.slice/crio-f3903fcc4af8e2caff9ee0bfe3a6456dc94ac5a8f51e4b6855387036cc1485b2 WatchSource:0}: Error finding container f3903fcc4af8e2caff9ee0bfe3a6456dc94ac5a8f51e4b6855387036cc1485b2: Status 404 returned error can't find the container with id f3903fcc4af8e2caff9ee0bfe3a6456dc94ac5a8f51e4b6855387036cc1485b2 Feb 16 13:34:23 crc kubenswrapper[4812]: I0216 13:34:23.989224 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g8zmc" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.025164 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1a9695b-636b-4b29-a6dd-4e0708706b74-catalog-content\") pod \"redhat-operators-fjz4f\" (UID: \"c1a9695b-636b-4b29-a6dd-4e0708706b74\") " pod="openshift-marketplace/redhat-operators-fjz4f" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.025256 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1a9695b-636b-4b29-a6dd-4e0708706b74-utilities\") pod \"redhat-operators-fjz4f\" (UID: \"c1a9695b-636b-4b29-a6dd-4e0708706b74\") " pod="openshift-marketplace/redhat-operators-fjz4f" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.025325 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sddfp\" (UniqueName: \"kubernetes.io/projected/c1a9695b-636b-4b29-a6dd-4e0708706b74-kube-api-access-sddfp\") pod \"redhat-operators-fjz4f\" (UID: 
\"c1a9695b-636b-4b29-a6dd-4e0708706b74\") " pod="openshift-marketplace/redhat-operators-fjz4f" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.126513 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1a9695b-636b-4b29-a6dd-4e0708706b74-catalog-content\") pod \"redhat-operators-fjz4f\" (UID: \"c1a9695b-636b-4b29-a6dd-4e0708706b74\") " pod="openshift-marketplace/redhat-operators-fjz4f" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.126605 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1a9695b-636b-4b29-a6dd-4e0708706b74-utilities\") pod \"redhat-operators-fjz4f\" (UID: \"c1a9695b-636b-4b29-a6dd-4e0708706b74\") " pod="openshift-marketplace/redhat-operators-fjz4f" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.126646 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sddfp\" (UniqueName: \"kubernetes.io/projected/c1a9695b-636b-4b29-a6dd-4e0708706b74-kube-api-access-sddfp\") pod \"redhat-operators-fjz4f\" (UID: \"c1a9695b-636b-4b29-a6dd-4e0708706b74\") " pod="openshift-marketplace/redhat-operators-fjz4f" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.127822 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1a9695b-636b-4b29-a6dd-4e0708706b74-catalog-content\") pod \"redhat-operators-fjz4f\" (UID: \"c1a9695b-636b-4b29-a6dd-4e0708706b74\") " pod="openshift-marketplace/redhat-operators-fjz4f" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.129286 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1a9695b-636b-4b29-a6dd-4e0708706b74-utilities\") pod \"redhat-operators-fjz4f\" (UID: \"c1a9695b-636b-4b29-a6dd-4e0708706b74\") " 
pod="openshift-marketplace/redhat-operators-fjz4f" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.166272 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sddfp\" (UniqueName: \"kubernetes.io/projected/c1a9695b-636b-4b29-a6dd-4e0708706b74-kube-api-access-sddfp\") pod \"redhat-operators-fjz4f\" (UID: \"c1a9695b-636b-4b29-a6dd-4e0708706b74\") " pod="openshift-marketplace/redhat-operators-fjz4f" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.172316 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.222021 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lw64z"] Feb 16 13:34:24 crc kubenswrapper[4812]: E0216 13:34:24.223305 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca937fd-eef1-4f91-b825-18d5429526a9" containerName="collect-profiles" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.223325 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca937fd-eef1-4f91-b825-18d5429526a9" containerName="collect-profiles" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.223480 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca937fd-eef1-4f91-b825-18d5429526a9" containerName="collect-profiles" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.224333 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.227433 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fjz4f" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.238904 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lw64z"] Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.278371 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.328315 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fca937fd-eef1-4f91-b825-18d5429526a9-config-volume\") pod \"fca937fd-eef1-4f91-b825-18d5429526a9\" (UID: \"fca937fd-eef1-4f91-b825-18d5429526a9\") " Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.328398 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fca937fd-eef1-4f91-b825-18d5429526a9-secret-volume\") pod \"fca937fd-eef1-4f91-b825-18d5429526a9\" (UID: \"fca937fd-eef1-4f91-b825-18d5429526a9\") " Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.328525 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw89n\" (UniqueName: \"kubernetes.io/projected/fca937fd-eef1-4f91-b825-18d5429526a9-kube-api-access-cw89n\") pod \"fca937fd-eef1-4f91-b825-18d5429526a9\" (UID: \"fca937fd-eef1-4f91-b825-18d5429526a9\") " Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.328697 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68168164-88dd-4c28-824f-e1702db05aea-utilities\") pod \"redhat-operators-lw64z\" (UID: \"68168164-88dd-4c28-824f-e1702db05aea\") " pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.328761 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68168164-88dd-4c28-824f-e1702db05aea-catalog-content\") pod \"redhat-operators-lw64z\" (UID: \"68168164-88dd-4c28-824f-e1702db05aea\") " pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.328785 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t7x4\" (UniqueName: \"kubernetes.io/projected/68168164-88dd-4c28-824f-e1702db05aea-kube-api-access-9t7x4\") pod \"redhat-operators-lw64z\" (UID: \"68168164-88dd-4c28-824f-e1702db05aea\") " pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.329612 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fca937fd-eef1-4f91-b825-18d5429526a9-config-volume" (OuterVolumeSpecName: "config-volume") pod "fca937fd-eef1-4f91-b825-18d5429526a9" (UID: "fca937fd-eef1-4f91-b825-18d5429526a9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.334874 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca937fd-eef1-4f91-b825-18d5429526a9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fca937fd-eef1-4f91-b825-18d5429526a9" (UID: "fca937fd-eef1-4f91-b825-18d5429526a9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.336803 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca937fd-eef1-4f91-b825-18d5429526a9-kube-api-access-cw89n" (OuterVolumeSpecName: "kube-api-access-cw89n") pod "fca937fd-eef1-4f91-b825-18d5429526a9" (UID: "fca937fd-eef1-4f91-b825-18d5429526a9"). 
InnerVolumeSpecName "kube-api-access-cw89n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.430372 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68168164-88dd-4c28-824f-e1702db05aea-utilities\") pod \"redhat-operators-lw64z\" (UID: \"68168164-88dd-4c28-824f-e1702db05aea\") " pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.430670 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68168164-88dd-4c28-824f-e1702db05aea-catalog-content\") pod \"redhat-operators-lw64z\" (UID: \"68168164-88dd-4c28-824f-e1702db05aea\") " pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.430701 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t7x4\" (UniqueName: \"kubernetes.io/projected/68168164-88dd-4c28-824f-e1702db05aea-kube-api-access-9t7x4\") pod \"redhat-operators-lw64z\" (UID: \"68168164-88dd-4c28-824f-e1702db05aea\") " pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.430766 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw89n\" (UniqueName: \"kubernetes.io/projected/fca937fd-eef1-4f91-b825-18d5429526a9-kube-api-access-cw89n\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.430779 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fca937fd-eef1-4f91-b825-18d5429526a9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.430788 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/fca937fd-eef1-4f91-b825-18d5429526a9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.431345 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68168164-88dd-4c28-824f-e1702db05aea-catalog-content\") pod \"redhat-operators-lw64z\" (UID: \"68168164-88dd-4c28-824f-e1702db05aea\") " pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.431769 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68168164-88dd-4c28-824f-e1702db05aea-utilities\") pod \"redhat-operators-lw64z\" (UID: \"68168164-88dd-4c28-824f-e1702db05aea\") " pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.450241 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t7x4\" (UniqueName: \"kubernetes.io/projected/68168164-88dd-4c28-824f-e1702db05aea-kube-api-access-9t7x4\") pod \"redhat-operators-lw64z\" (UID: \"68168164-88dd-4c28-824f-e1702db05aea\") " pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.559994 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fjz4f"] Feb 16 13:34:24 crc kubenswrapper[4812]: W0216 13:34:24.607521 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1a9695b_636b_4b29_a6dd_4e0708706b74.slice/crio-ed7912f86767084d314f6753c182757c59658d27850b2c463ee654930d9e998a WatchSource:0}: Error finding container ed7912f86767084d314f6753c182757c59658d27850b2c463ee654930d9e998a: Status 404 returned error can't find the container with id ed7912f86767084d314f6753c182757c59658d27850b2c463ee654930d9e998a Feb 16 13:34:24 crc 
kubenswrapper[4812]: I0216 13:34:24.608380 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.765478 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" event={"ID":"fca937fd-eef1-4f91-b825-18d5429526a9","Type":"ContainerDied","Data":"66bd4ce751022289c637e05e55ad5ed14253c8f1fbf3cdd00c403ccdbb260e23"} Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.765517 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66bd4ce751022289c637e05e55ad5ed14253c8f1fbf3cdd00c403ccdbb260e23" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.765760 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.783954 4812 generic.go:334] "Generic (PLEG): container finished" podID="108c709d-3892-437c-8389-eacf83464173" containerID="eb59638b5e66cc8ee28a2752864aabc81b19de141fdc8f2080103b2e0fa7a647" exitCode=0 Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.784028 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"108c709d-3892-437c-8389-eacf83464173","Type":"ContainerDied","Data":"eb59638b5e66cc8ee28a2752864aabc81b19de141fdc8f2080103b2e0fa7a647"} Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.790399 4812 generic.go:334] "Generic (PLEG): container finished" podID="1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" containerID="196128f078b5bd69c93040bb65ba1008aef7a641f66a262c292e0c88ccbcc77f" exitCode=0 Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.790470 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbfqw" 
event={"ID":"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce","Type":"ContainerDied","Data":"196128f078b5bd69c93040bb65ba1008aef7a641f66a262c292e0c88ccbcc77f"} Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.790534 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbfqw" event={"ID":"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce","Type":"ContainerStarted","Data":"0855fd018bb4991cc65be52eb6ccf2b1ac9122095be85de7e28db9d6e46a1c0b"} Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.800758 4812 generic.go:334] "Generic (PLEG): container finished" podID="0bd26c64-69c9-4dcb-bd15-8f3a7f156cac" containerID="33673eae05aecef3485598fe0274ae21ae8d8f525df6e1ddd2b811308e0e3fbb" exitCode=0 Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.800872 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0bd26c64-69c9-4dcb-bd15-8f3a7f156cac","Type":"ContainerDied","Data":"33673eae05aecef3485598fe0274ae21ae8d8f525df6e1ddd2b811308e0e3fbb"} Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.822778 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fjz4f" event={"ID":"c1a9695b-636b-4b29-a6dd-4e0708706b74","Type":"ContainerStarted","Data":"ed7912f86767084d314f6753c182757c59658d27850b2c463ee654930d9e998a"} Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.844516 4812 generic.go:334] "Generic (PLEG): container finished" podID="a297c2d9-88a8-4019-94f5-c1f5498bee86" containerID="e50852e67a262f14f518869331f31c8296e0f369ece81f2b52a246918b26dc76" exitCode=0 Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.845613 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqhcl" event={"ID":"a297c2d9-88a8-4019-94f5-c1f5498bee86","Type":"ContainerDied","Data":"e50852e67a262f14f518869331f31c8296e0f369ece81f2b52a246918b26dc76"} Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 
13:34:24.845716 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.845749 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqhcl" event={"ID":"a297c2d9-88a8-4019-94f5-c1f5498bee86","Type":"ContainerStarted","Data":"f3903fcc4af8e2caff9ee0bfe3a6456dc94ac5a8f51e4b6855387036cc1485b2"} Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.922816 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:24 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:24 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:24 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:24 crc kubenswrapper[4812]: I0216 13:34:24.922875 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:25 crc kubenswrapper[4812]: I0216 13:34:25.347391 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lw64z"] Feb 16 13:34:25 crc kubenswrapper[4812]: W0216 13:34:25.366832 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68168164_88dd_4c28_824f_e1702db05aea.slice/crio-8591abf65d798941bb85fdb8d72141a746d54422eea8d59092312439f402602b WatchSource:0}: Error finding container 8591abf65d798941bb85fdb8d72141a746d54422eea8d59092312439f402602b: Status 404 returned error can't find the container with id 
8591abf65d798941bb85fdb8d72141a746d54422eea8d59092312439f402602b Feb 16 13:34:25 crc kubenswrapper[4812]: I0216 13:34:25.537997 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:34:25 crc kubenswrapper[4812]: I0216 13:34:25.857255 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw64z" event={"ID":"68168164-88dd-4c28-824f-e1702db05aea","Type":"ContainerStarted","Data":"8591abf65d798941bb85fdb8d72141a746d54422eea8d59092312439f402602b"} Feb 16 13:34:25 crc kubenswrapper[4812]: I0216 13:34:25.863192 4812 generic.go:334] "Generic (PLEG): container finished" podID="c1a9695b-636b-4b29-a6dd-4e0708706b74" containerID="e130ecd9d28e4102e73d87046fcf9b46f5f152f109446dd01a647e2379cd8469" exitCode=0 Feb 16 13:34:25 crc kubenswrapper[4812]: I0216 13:34:25.863241 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fjz4f" event={"ID":"c1a9695b-636b-4b29-a6dd-4e0708706b74","Type":"ContainerDied","Data":"e130ecd9d28e4102e73d87046fcf9b46f5f152f109446dd01a647e2379cd8469"} Feb 16 13:34:25 crc kubenswrapper[4812]: I0216 13:34:25.918675 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:25 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:25 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:25 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:25 crc kubenswrapper[4812]: I0216 13:34:25.918800 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:26 crc 
kubenswrapper[4812]: I0216 13:34:26.003508 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-pxwbj" Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.209371 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.210383 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.268288 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/108c709d-3892-437c-8389-eacf83464173-kubelet-dir\") pod \"108c709d-3892-437c-8389-eacf83464173\" (UID: \"108c709d-3892-437c-8389-eacf83464173\") " Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.268379 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/108c709d-3892-437c-8389-eacf83464173-kube-api-access\") pod \"108c709d-3892-437c-8389-eacf83464173\" (UID: \"108c709d-3892-437c-8389-eacf83464173\") " Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.268405 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/108c709d-3892-437c-8389-eacf83464173-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "108c709d-3892-437c-8389-eacf83464173" (UID: "108c709d-3892-437c-8389-eacf83464173"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.268427 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0bd26c64-69c9-4dcb-bd15-8f3a7f156cac-kube-api-access\") pod \"0bd26c64-69c9-4dcb-bd15-8f3a7f156cac\" (UID: \"0bd26c64-69c9-4dcb-bd15-8f3a7f156cac\") " Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.268488 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0bd26c64-69c9-4dcb-bd15-8f3a7f156cac-kubelet-dir\") pod \"0bd26c64-69c9-4dcb-bd15-8f3a7f156cac\" (UID: \"0bd26c64-69c9-4dcb-bd15-8f3a7f156cac\") " Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.268722 4812 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/108c709d-3892-437c-8389-eacf83464173-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.268775 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bd26c64-69c9-4dcb-bd15-8f3a7f156cac-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0bd26c64-69c9-4dcb-bd15-8f3a7f156cac" (UID: "0bd26c64-69c9-4dcb-bd15-8f3a7f156cac"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.291127 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd26c64-69c9-4dcb-bd15-8f3a7f156cac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0bd26c64-69c9-4dcb-bd15-8f3a7f156cac" (UID: "0bd26c64-69c9-4dcb-bd15-8f3a7f156cac"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.291171 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/108c709d-3892-437c-8389-eacf83464173-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "108c709d-3892-437c-8389-eacf83464173" (UID: "108c709d-3892-437c-8389-eacf83464173"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.370615 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0bd26c64-69c9-4dcb-bd15-8f3a7f156cac-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.370648 4812 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0bd26c64-69c9-4dcb-bd15-8f3a7f156cac-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.370661 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/108c709d-3892-437c-8389-eacf83464173-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.899305 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"108c709d-3892-437c-8389-eacf83464173","Type":"ContainerDied","Data":"b0a4b0d83b89300755649b52e58d7bbbbef1309fb0af3feaac597dfced147d70"} Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.899350 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0a4b0d83b89300755649b52e58d7bbbbef1309fb0af3feaac597dfced147d70" Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.899373 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.901416 4812 generic.go:334] "Generic (PLEG): container finished" podID="68168164-88dd-4c28-824f-e1702db05aea" containerID="4c595c972a862b05758226cac26d669c828114cb8a3d6c37614970811d5bc39d" exitCode=0 Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.901481 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw64z" event={"ID":"68168164-88dd-4c28-824f-e1702db05aea","Type":"ContainerDied","Data":"4c595c972a862b05758226cac26d669c828114cb8a3d6c37614970811d5bc39d"} Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.904153 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0bd26c64-69c9-4dcb-bd15-8f3a7f156cac","Type":"ContainerDied","Data":"eee14dd60549b5eaddd922882ed3b82bb6606245e8511149b3c7356571e3b3ae"} Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.904191 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee14dd60549b5eaddd922882ed3b82bb6606245e8511149b3c7356571e3b3ae" Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.904249 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.919267 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:26 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:26 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:26 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:26 crc kubenswrapper[4812]: I0216 13:34:26.919310 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:27 crc kubenswrapper[4812]: I0216 13:34:27.920826 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:27 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:27 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:27 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:27 crc kubenswrapper[4812]: I0216 13:34:27.920888 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:28 crc kubenswrapper[4812]: I0216 13:34:28.481722 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:28 crc kubenswrapper[4812]: I0216 13:34:28.492656 4812 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-wc6pn" Feb 16 13:34:28 crc kubenswrapper[4812]: I0216 13:34:28.918906 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:28 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:28 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:28 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:28 crc kubenswrapper[4812]: I0216 13:34:28.918999 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:29 crc kubenswrapper[4812]: I0216 13:34:29.917089 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:29 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:29 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:29 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:29 crc kubenswrapper[4812]: I0216 13:34:29.917148 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:30 crc kubenswrapper[4812]: I0216 13:34:30.918052 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:30 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:30 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:30 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:30 crc kubenswrapper[4812]: I0216 13:34:30.918411 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:31 crc kubenswrapper[4812]: I0216 13:34:31.918617 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:31 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:31 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:31 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:31 crc kubenswrapper[4812]: I0216 13:34:31.918694 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:32 crc kubenswrapper[4812]: I0216 13:34:32.916995 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:32 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:32 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:32 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:32 crc 
kubenswrapper[4812]: I0216 13:34:32.917055 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:33 crc kubenswrapper[4812]: I0216 13:34:33.188799 4812 patch_prober.go:28] interesting pod/console-f9d7485db-tpgqc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 16 13:34:33 crc kubenswrapper[4812]: I0216 13:34:33.188871 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tpgqc" podUID="d8f24d90-54d8-4344-8140-c9fa919b456a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 16 13:34:33 crc kubenswrapper[4812]: I0216 13:34:33.196953 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-sv88f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 16 13:34:33 crc kubenswrapper[4812]: I0216 13:34:33.196997 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sv88f" podUID="c221ee5f-91c7-4ca7-9567-55cd7bd72beb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 16 13:34:33 crc kubenswrapper[4812]: I0216 13:34:33.197307 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-sv88f container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" 
start-of-body= Feb 16 13:34:33 crc kubenswrapper[4812]: I0216 13:34:33.197343 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sv88f" podUID="c221ee5f-91c7-4ca7-9567-55cd7bd72beb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 16 13:34:33 crc kubenswrapper[4812]: I0216 13:34:33.918308 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:33 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Feb 16 13:34:33 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:33 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:33 crc kubenswrapper[4812]: I0216 13:34:33.918486 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 13:34:34 crc kubenswrapper[4812]: I0216 13:34:34.918098 4812 patch_prober.go:28] interesting pod/router-default-5444994796-b7psd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 13:34:34 crc kubenswrapper[4812]: [+]has-synced ok Feb 16 13:34:34 crc kubenswrapper[4812]: [+]process-running ok Feb 16 13:34:34 crc kubenswrapper[4812]: healthz check failed Feb 16 13:34:34 crc kubenswrapper[4812]: I0216 13:34:34.918175 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b7psd" podUID="5bd1b4d8-80f4-4044-891b-a5e3450a0f48" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Feb 16 13:34:35 crc kubenswrapper[4812]: I0216 13:34:35.919268 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:35 crc kubenswrapper[4812]: I0216 13:34:35.921989 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-b7psd" Feb 16 13:34:35 crc kubenswrapper[4812]: I0216 13:34:35.933386 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs\") pod \"network-metrics-daemon-szt79\" (UID: \"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\") " pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:34:35 crc kubenswrapper[4812]: I0216 13:34:35.955553 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a1f0c6-cafa-4c67-a2ad-d6003e88613c-metrics-certs\") pod \"network-metrics-daemon-szt79\" (UID: \"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c\") " pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:34:36 crc kubenswrapper[4812]: I0216 13:34:36.201238 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-szt79" Feb 16 13:34:38 crc kubenswrapper[4812]: I0216 13:34:38.026754 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-tnqj2_c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e/cluster-samples-operator/0.log" Feb 16 13:34:38 crc kubenswrapper[4812]: I0216 13:34:38.027043 4812 generic.go:334] "Generic (PLEG): container finished" podID="c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e" containerID="4a726caa87e0cd4c8d456b1ada2167aa9923426a5a34599c6c494d8ffcac8fb1" exitCode=2 Feb 16 13:34:38 crc kubenswrapper[4812]: I0216 13:34:38.027079 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2" event={"ID":"c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e","Type":"ContainerDied","Data":"4a726caa87e0cd4c8d456b1ada2167aa9923426a5a34599c6c494d8ffcac8fb1"} Feb 16 13:34:38 crc kubenswrapper[4812]: I0216 13:34:38.027666 4812 scope.go:117] "RemoveContainer" containerID="4a726caa87e0cd4c8d456b1ada2167aa9923426a5a34599c6c494d8ffcac8fb1" Feb 16 13:34:39 crc kubenswrapper[4812]: I0216 13:34:39.919568 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-72lrh"] Feb 16 13:34:39 crc kubenswrapper[4812]: I0216 13:34:39.919825 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" podUID="5be0ecd5-70de-4fa9-abcc-685cef55d530" containerName="controller-manager" containerID="cri-o://c95b1163627d12ca101b1b86869f286aa23046af728713ee49a85d4a096302fb" gracePeriod=30 Feb 16 13:34:39 crc kubenswrapper[4812]: I0216 13:34:39.945066 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2"] Feb 16 13:34:39 crc kubenswrapper[4812]: I0216 13:34:39.945327 4812 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" podUID="1dcfa0e5-1712-4411-afe5-e922c185b120" containerName="route-controller-manager" containerID="cri-o://f06a89a22d1996e3d6cbc87180eb6522ce46d81b424eaf1c827c0bd87358783f" gracePeriod=30 Feb 16 13:34:40 crc kubenswrapper[4812]: I0216 13:34:40.036858 4812 generic.go:334] "Generic (PLEG): container finished" podID="5be0ecd5-70de-4fa9-abcc-685cef55d530" containerID="c95b1163627d12ca101b1b86869f286aa23046af728713ee49a85d4a096302fb" exitCode=0 Feb 16 13:34:40 crc kubenswrapper[4812]: I0216 13:34:40.036912 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" event={"ID":"5be0ecd5-70de-4fa9-abcc-685cef55d530","Type":"ContainerDied","Data":"c95b1163627d12ca101b1b86869f286aa23046af728713ee49a85d4a096302fb"} Feb 16 13:34:41 crc kubenswrapper[4812]: I0216 13:34:41.043291 4812 generic.go:334] "Generic (PLEG): container finished" podID="1dcfa0e5-1712-4411-afe5-e922c185b120" containerID="f06a89a22d1996e3d6cbc87180eb6522ce46d81b424eaf1c827c0bd87358783f" exitCode=0 Feb 16 13:34:41 crc kubenswrapper[4812]: I0216 13:34:41.043380 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" event={"ID":"1dcfa0e5-1712-4411-afe5-e922c185b120","Type":"ContainerDied","Data":"f06a89a22d1996e3d6cbc87180eb6522ce46d81b424eaf1c827c0bd87358783f"} Feb 16 13:34:42 crc kubenswrapper[4812]: I0216 13:34:42.521748 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:34:43 crc kubenswrapper[4812]: I0216 13:34:43.193544 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:43 crc kubenswrapper[4812]: I0216 13:34:43.197539 4812 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:34:43 crc kubenswrapper[4812]: I0216 13:34:43.205131 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-sv88f" Feb 16 13:34:43 crc kubenswrapper[4812]: I0216 13:34:43.810495 4812 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-5sss2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 16 13:34:43 crc kubenswrapper[4812]: I0216 13:34:43.810568 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" podUID="1dcfa0e5-1712-4411-afe5-e922c185b120" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 16 13:34:44 crc kubenswrapper[4812]: I0216 13:34:44.142800 4812 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-72lrh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 13:34:44 crc kubenswrapper[4812]: I0216 13:34:44.142846 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" podUID="5be0ecd5-70de-4fa9-abcc-685cef55d530" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 16 13:34:44 crc kubenswrapper[4812]: I0216 13:34:44.549049 4812 
patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:34:44 crc kubenswrapper[4812]: I0216 13:34:44.549123 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:34:48 crc kubenswrapper[4812]: I0216 13:34:48.879335 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:48 crc kubenswrapper[4812]: I0216 13:34:48.913752 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4"] Feb 16 13:34:48 crc kubenswrapper[4812]: E0216 13:34:48.914016 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dcfa0e5-1712-4411-afe5-e922c185b120" containerName="route-controller-manager" Feb 16 13:34:48 crc kubenswrapper[4812]: I0216 13:34:48.914031 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dcfa0e5-1712-4411-afe5-e922c185b120" containerName="route-controller-manager" Feb 16 13:34:48 crc kubenswrapper[4812]: E0216 13:34:48.914042 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108c709d-3892-437c-8389-eacf83464173" containerName="pruner" Feb 16 13:34:48 crc kubenswrapper[4812]: I0216 13:34:48.914052 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="108c709d-3892-437c-8389-eacf83464173" containerName="pruner" Feb 16 13:34:48 crc kubenswrapper[4812]: E0216 13:34:48.914074 4812 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0bd26c64-69c9-4dcb-bd15-8f3a7f156cac" containerName="pruner" Feb 16 13:34:48 crc kubenswrapper[4812]: I0216 13:34:48.914083 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd26c64-69c9-4dcb-bd15-8f3a7f156cac" containerName="pruner" Feb 16 13:34:48 crc kubenswrapper[4812]: I0216 13:34:48.914258 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd26c64-69c9-4dcb-bd15-8f3a7f156cac" containerName="pruner" Feb 16 13:34:48 crc kubenswrapper[4812]: I0216 13:34:48.914272 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dcfa0e5-1712-4411-afe5-e922c185b120" containerName="route-controller-manager" Feb 16 13:34:48 crc kubenswrapper[4812]: I0216 13:34:48.914296 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="108c709d-3892-437c-8389-eacf83464173" containerName="pruner" Feb 16 13:34:48 crc kubenswrapper[4812]: I0216 13:34:48.914762 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:34:48 crc kubenswrapper[4812]: I0216 13:34:48.917092 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4"] Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.008542 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84w29\" (UniqueName: \"kubernetes.io/projected/1dcfa0e5-1712-4411-afe5-e922c185b120-kube-api-access-84w29\") pod \"1dcfa0e5-1712-4411-afe5-e922c185b120\" (UID: \"1dcfa0e5-1712-4411-afe5-e922c185b120\") " Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.008604 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dcfa0e5-1712-4411-afe5-e922c185b120-config\") pod \"1dcfa0e5-1712-4411-afe5-e922c185b120\" (UID: \"1dcfa0e5-1712-4411-afe5-e922c185b120\") " Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.008667 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dcfa0e5-1712-4411-afe5-e922c185b120-serving-cert\") pod \"1dcfa0e5-1712-4411-afe5-e922c185b120\" (UID: \"1dcfa0e5-1712-4411-afe5-e922c185b120\") " Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.009500 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dcfa0e5-1712-4411-afe5-e922c185b120-config" (OuterVolumeSpecName: "config") pod "1dcfa0e5-1712-4411-afe5-e922c185b120" (UID: "1dcfa0e5-1712-4411-afe5-e922c185b120"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.009644 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dcfa0e5-1712-4411-afe5-e922c185b120-client-ca\") pod \"1dcfa0e5-1712-4411-afe5-e922c185b120\" (UID: \"1dcfa0e5-1712-4411-afe5-e922c185b120\") " Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.009882 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8t68\" (UniqueName: \"kubernetes.io/projected/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-kube-api-access-d8t68\") pod \"route-controller-manager-6bb46c8d9c-dlrh4\" (UID: \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.009938 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-config\") pod \"route-controller-manager-6bb46c8d9c-dlrh4\" (UID: \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.010180 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dcfa0e5-1712-4411-afe5-e922c185b120-client-ca" (OuterVolumeSpecName: "client-ca") pod "1dcfa0e5-1712-4411-afe5-e922c185b120" (UID: "1dcfa0e5-1712-4411-afe5-e922c185b120"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.010197 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-serving-cert\") pod \"route-controller-manager-6bb46c8d9c-dlrh4\" (UID: \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.010494 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-client-ca\") pod \"route-controller-manager-6bb46c8d9c-dlrh4\" (UID: \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.010659 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dcfa0e5-1712-4411-afe5-e922c185b120-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.010678 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dcfa0e5-1712-4411-afe5-e922c185b120-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.014464 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dcfa0e5-1712-4411-afe5-e922c185b120-kube-api-access-84w29" (OuterVolumeSpecName: "kube-api-access-84w29") pod "1dcfa0e5-1712-4411-afe5-e922c185b120" (UID: "1dcfa0e5-1712-4411-afe5-e922c185b120"). InnerVolumeSpecName "kube-api-access-84w29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.014637 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dcfa0e5-1712-4411-afe5-e922c185b120-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1dcfa0e5-1712-4411-afe5-e922c185b120" (UID: "1dcfa0e5-1712-4411-afe5-e922c185b120"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.095523 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" event={"ID":"1dcfa0e5-1712-4411-afe5-e922c185b120","Type":"ContainerDied","Data":"8a67c64e8499e331968e6f63acd9ee0c0c61e3174a9cabbb05be7ed9e60a19d3"} Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.095573 4812 scope.go:117] "RemoveContainer" containerID="f06a89a22d1996e3d6cbc87180eb6522ce46d81b424eaf1c827c0bd87358783f" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.095599 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.111037 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-client-ca\") pod \"route-controller-manager-6bb46c8d9c-dlrh4\" (UID: \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.111999 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-client-ca\") pod \"route-controller-manager-6bb46c8d9c-dlrh4\" (UID: \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.112100 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8t68\" (UniqueName: \"kubernetes.io/projected/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-kube-api-access-d8t68\") pod \"route-controller-manager-6bb46c8d9c-dlrh4\" (UID: \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.112151 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-config\") pod \"route-controller-manager-6bb46c8d9c-dlrh4\" (UID: \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.112236 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-serving-cert\") pod \"route-controller-manager-6bb46c8d9c-dlrh4\" (UID: \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.114211 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84w29\" (UniqueName: \"kubernetes.io/projected/1dcfa0e5-1712-4411-afe5-e922c185b120-kube-api-access-84w29\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.114227 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-config\") pod \"route-controller-manager-6bb46c8d9c-dlrh4\" (UID: \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.114234 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dcfa0e5-1712-4411-afe5-e922c185b120-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.123106 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-serving-cert\") pod \"route-controller-manager-6bb46c8d9c-dlrh4\" (UID: \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.127584 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2"] Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.128917 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5sss2"] Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.132809 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8t68\" (UniqueName: \"kubernetes.io/projected/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-kube-api-access-d8t68\") pod \"route-controller-manager-6bb46c8d9c-dlrh4\" (UID: \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.231615 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:34:49 crc kubenswrapper[4812]: I0216 13:34:49.886421 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dcfa0e5-1712-4411-afe5-e922c185b120" path="/var/lib/kubelet/pods/1dcfa0e5-1712-4411-afe5-e922c185b120/volumes" Feb 16 13:34:50 crc kubenswrapper[4812]: E0216 13:34:50.927405 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 16 13:34:50 crc kubenswrapper[4812]: E0216 13:34:50.927649 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95c4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-cqhcl_openshift-marketplace(a297c2d9-88a8-4019-94f5-c1f5498bee86): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 13:34:50 crc kubenswrapper[4812]: E0216 13:34:50.928914 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cqhcl" podUID="a297c2d9-88a8-4019-94f5-c1f5498bee86" Feb 16 13:34:53 crc 
kubenswrapper[4812]: E0216 13:34:53.161072 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-cqhcl" podUID="a297c2d9-88a8-4019-94f5-c1f5498bee86" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.229375 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.261991 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8"] Feb 16 13:34:53 crc kubenswrapper[4812]: E0216 13:34:53.262301 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be0ecd5-70de-4fa9-abcc-685cef55d530" containerName="controller-manager" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.262315 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be0ecd5-70de-4fa9-abcc-685cef55d530" containerName="controller-manager" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.263245 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be0ecd5-70de-4fa9-abcc-685cef55d530" containerName="controller-manager" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.263725 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.271289 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8"] Feb 16 13:34:53 crc kubenswrapper[4812]: E0216 13:34:53.287337 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 16 13:34:53 crc kubenswrapper[4812]: E0216 13:34:53.287531 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rzmm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompPr
ofile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gfhfv_openshift-marketplace(567e2fcc-e342-41e9-a406-4758f7c5551e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 13:34:53 crc kubenswrapper[4812]: E0216 13:34:53.289140 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gfhfv" podUID="567e2fcc-e342-41e9-a406-4758f7c5551e" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.363463 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-proxy-ca-bundles\") pod \"5be0ecd5-70de-4fa9-abcc-685cef55d530\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.363509 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5be0ecd5-70de-4fa9-abcc-685cef55d530-serving-cert\") pod \"5be0ecd5-70de-4fa9-abcc-685cef55d530\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.363532 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-client-ca\") pod \"5be0ecd5-70de-4fa9-abcc-685cef55d530\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " Feb 16 13:34:53 crc 
kubenswrapper[4812]: I0216 13:34:53.363620 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct7z2\" (UniqueName: \"kubernetes.io/projected/5be0ecd5-70de-4fa9-abcc-685cef55d530-kube-api-access-ct7z2\") pod \"5be0ecd5-70de-4fa9-abcc-685cef55d530\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.363637 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-config\") pod \"5be0ecd5-70de-4fa9-abcc-685cef55d530\" (UID: \"5be0ecd5-70de-4fa9-abcc-685cef55d530\") " Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.364067 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39832661-a5f7-43f8-825a-3814ef674ee0-serving-cert\") pod \"controller-manager-76c5dc67ff-7h5h8\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.364113 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j27b\" (UniqueName: \"kubernetes.io/projected/39832661-a5f7-43f8-825a-3814ef674ee0-kube-api-access-6j27b\") pod \"controller-manager-76c5dc67ff-7h5h8\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.364139 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-client-ca\") pod \"controller-manager-76c5dc67ff-7h5h8\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " 
pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.364180 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-proxy-ca-bundles\") pod \"controller-manager-76c5dc67ff-7h5h8\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.364213 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-config\") pod \"controller-manager-76c5dc67ff-7h5h8\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.364731 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-config" (OuterVolumeSpecName: "config") pod "5be0ecd5-70de-4fa9-abcc-685cef55d530" (UID: "5be0ecd5-70de-4fa9-abcc-685cef55d530"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.365184 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5be0ecd5-70de-4fa9-abcc-685cef55d530" (UID: "5be0ecd5-70de-4fa9-abcc-685cef55d530"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.368357 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-client-ca" (OuterVolumeSpecName: "client-ca") pod "5be0ecd5-70de-4fa9-abcc-685cef55d530" (UID: "5be0ecd5-70de-4fa9-abcc-685cef55d530"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.374553 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5be0ecd5-70de-4fa9-abcc-685cef55d530-kube-api-access-ct7z2" (OuterVolumeSpecName: "kube-api-access-ct7z2") pod "5be0ecd5-70de-4fa9-abcc-685cef55d530" (UID: "5be0ecd5-70de-4fa9-abcc-685cef55d530"). InnerVolumeSpecName "kube-api-access-ct7z2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.374949 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5be0ecd5-70de-4fa9-abcc-685cef55d530-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5be0ecd5-70de-4fa9-abcc-685cef55d530" (UID: "5be0ecd5-70de-4fa9-abcc-685cef55d530"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.464903 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-client-ca\") pod \"controller-manager-76c5dc67ff-7h5h8\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.464951 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-proxy-ca-bundles\") pod \"controller-manager-76c5dc67ff-7h5h8\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.464983 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-config\") pod \"controller-manager-76c5dc67ff-7h5h8\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.465010 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39832661-a5f7-43f8-825a-3814ef674ee0-serving-cert\") pod \"controller-manager-76c5dc67ff-7h5h8\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.466231 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-client-ca\") pod \"controller-manager-76c5dc67ff-7h5h8\" 
(UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.466347 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-config\") pod \"controller-manager-76c5dc67ff-7h5h8\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.466418 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j27b\" (UniqueName: \"kubernetes.io/projected/39832661-a5f7-43f8-825a-3814ef674ee0-kube-api-access-6j27b\") pod \"controller-manager-76c5dc67ff-7h5h8\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.466481 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.466492 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct7z2\" (UniqueName: \"kubernetes.io/projected/5be0ecd5-70de-4fa9-abcc-685cef55d530-kube-api-access-ct7z2\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.466501 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.466510 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5be0ecd5-70de-4fa9-abcc-685cef55d530-proxy-ca-bundles\") on node 
\"crc\" DevicePath \"\"" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.466519 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5be0ecd5-70de-4fa9-abcc-685cef55d530-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.466772 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-proxy-ca-bundles\") pod \"controller-manager-76c5dc67ff-7h5h8\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.469471 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39832661-a5f7-43f8-825a-3814ef674ee0-serving-cert\") pod \"controller-manager-76c5dc67ff-7h5h8\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: E0216 13:34:53.476611 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 13:34:53 crc kubenswrapper[4812]: E0216 13:34:53.476803 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvng8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-t9zmh_openshift-marketplace(2984d252-d29e-49b5-87ed-9ce7d19edc6d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 13:34:53 crc kubenswrapper[4812]: E0216 13:34:53.478009 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-t9zmh" podUID="2984d252-d29e-49b5-87ed-9ce7d19edc6d" Feb 16 13:34:53 crc 
kubenswrapper[4812]: I0216 13:34:53.483822 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j27b\" (UniqueName: \"kubernetes.io/projected/39832661-a5f7-43f8-825a-3814ef674ee0-kube-api-access-6j27b\") pod \"controller-manager-76c5dc67ff-7h5h8\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.587238 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-szt79"] Feb 16 13:34:53 crc kubenswrapper[4812]: W0216 13:34:53.595627 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2a1f0c6_cafa_4c67_a2ad_d6003e88613c.slice/crio-7895064440b81fbaf317911d3951641367a66672c461cc03b45748259af67dfb WatchSource:0}: Error finding container 7895064440b81fbaf317911d3951641367a66672c461cc03b45748259af67dfb: Status 404 returned error can't find the container with id 7895064440b81fbaf317911d3951641367a66672c461cc03b45748259af67dfb Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.663713 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:53 crc kubenswrapper[4812]: I0216 13:34:53.685842 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4"] Feb 16 13:34:53 crc kubenswrapper[4812]: W0216 13:34:53.693113 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a0c99fc_1734_47c4_99d2_f26f8f5b9d5b.slice/crio-7c6f8f9b6df56fd6c5cd34d5fd946380ceea2c0f5d5a4d2e02f8edfbe6274778 WatchSource:0}: Error finding container 7c6f8f9b6df56fd6c5cd34d5fd946380ceea2c0f5d5a4d2e02f8edfbe6274778: Status 404 returned error can't find the container with id 7c6f8f9b6df56fd6c5cd34d5fd946380ceea2c0f5d5a4d2e02f8edfbe6274778 Feb 16 13:34:53 crc kubenswrapper[4812]: E0216 13:34:53.952347 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 16 13:34:53 crc kubenswrapper[4812]: E0216 13:34:53.952797 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5j5jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cxh5t_openshift-marketplace(f4e6d69a-43ea-4b9b-a150-640b86bfbf42): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 13:34:53 crc kubenswrapper[4812]: E0216 13:34:53.954026 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cxh5t" podUID="f4e6d69a-43ea-4b9b-a150-640b86bfbf42" Feb 16 13:34:54 crc 
kubenswrapper[4812]: I0216 13:34:54.126119 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" event={"ID":"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b","Type":"ContainerStarted","Data":"5ac8a599a65f276ab0d6e980a7cd3e4f95462319c7cf4b6df0c202129058c574"} Feb 16 13:34:54 crc kubenswrapper[4812]: I0216 13:34:54.126167 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" event={"ID":"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b","Type":"ContainerStarted","Data":"7c6f8f9b6df56fd6c5cd34d5fd946380ceea2c0f5d5a4d2e02f8edfbe6274778"} Feb 16 13:34:54 crc kubenswrapper[4812]: I0216 13:34:54.136193 4812 generic.go:334] "Generic (PLEG): container finished" podID="c25320a0-e0f3-40ae-b953-e249556bc4f6" containerID="3619fc495b87e40610f213629f283e31bdcb79c94f7aeda343c25e6b698b146e" exitCode=0 Feb 16 13:34:54 crc kubenswrapper[4812]: I0216 13:34:54.136294 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zt4tm" event={"ID":"c25320a0-e0f3-40ae-b953-e249556bc4f6","Type":"ContainerDied","Data":"3619fc495b87e40610f213629f283e31bdcb79c94f7aeda343c25e6b698b146e"} Feb 16 13:34:54 crc kubenswrapper[4812]: I0216 13:34:54.142268 4812 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-72lrh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: i/o timeout" start-of-body= Feb 16 13:34:54 crc kubenswrapper[4812]: I0216 13:34:54.142335 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" podUID="5be0ecd5-70de-4fa9-abcc-685cef55d530" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: i/o timeout" Feb 16 
13:34:54 crc kubenswrapper[4812]: I0216 13:34:54.143768 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-tnqj2_c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e/cluster-samples-operator/0.log" Feb 16 13:34:54 crc kubenswrapper[4812]: I0216 13:34:54.144345 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tnqj2" event={"ID":"c89a7bcd-9899-41bb-81c9-c1dc56f1fd6e","Type":"ContainerStarted","Data":"981525a77e2e996a2982f0b2bbc9ef98ae669f885f2108efda9a21ecddb4e2b9"} Feb 16 13:34:54 crc kubenswrapper[4812]: I0216 13:34:54.151856 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" event={"ID":"5be0ecd5-70de-4fa9-abcc-685cef55d530","Type":"ContainerDied","Data":"40bbe3d8760dfc430533c728e1a01a8b67fffb9c377f58ce09d446332c3938a7"} Feb 16 13:34:54 crc kubenswrapper[4812]: I0216 13:34:54.151922 4812 scope.go:117] "RemoveContainer" containerID="c95b1163627d12ca101b1b86869f286aa23046af728713ee49a85d4a096302fb" Feb 16 13:34:54 crc kubenswrapper[4812]: I0216 13:34:54.152063 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-72lrh" Feb 16 13:34:54 crc kubenswrapper[4812]: I0216 13:34:54.155033 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-szt79" event={"ID":"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c","Type":"ContainerStarted","Data":"7895064440b81fbaf317911d3951641367a66672c461cc03b45748259af67dfb"} Feb 16 13:34:54 crc kubenswrapper[4812]: I0216 13:34:54.158330 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw64z" event={"ID":"68168164-88dd-4c28-824f-e1702db05aea","Type":"ContainerStarted","Data":"edf0c3822f080ddb2b355578cda6d93bf5e1917c89fe43fc0afc354e0d669095"} Feb 16 13:34:54 crc kubenswrapper[4812]: E0216 13:34:54.162515 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cxh5t" podUID="f4e6d69a-43ea-4b9b-a150-640b86bfbf42" Feb 16 13:34:54 crc kubenswrapper[4812]: E0216 13:34:54.162855 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-t9zmh" podUID="2984d252-d29e-49b5-87ed-9ce7d19edc6d" Feb 16 13:34:54 crc kubenswrapper[4812]: E0216 13:34:54.162986 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gfhfv" podUID="567e2fcc-e342-41e9-a406-4758f7c5551e" Feb 16 13:34:54 crc kubenswrapper[4812]: E0216 13:34:54.206311 4812 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 16 13:34:54 crc kubenswrapper[4812]: E0216 13:34:54.206831 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87fbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lbfqw_openshift-marketplace(1b3c17cd-2607-4379-9ff0-5ad26cfca6ce): 
ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 13:34:54 crc kubenswrapper[4812]: E0216 13:34:54.208055 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lbfqw" podUID="1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" Feb 16 13:34:54 crc kubenswrapper[4812]: I0216 13:34:54.261674 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b5crl" Feb 16 13:34:54 crc kubenswrapper[4812]: I0216 13:34:54.267415 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8"] Feb 16 13:34:54 crc kubenswrapper[4812]: W0216 13:34:54.285646 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39832661_a5f7_43f8_825a_3814ef674ee0.slice/crio-fc688922f6eb596e1befb04e6ef1a8659600c77a556612c036434476cc3b6339 WatchSource:0}: Error finding container fc688922f6eb596e1befb04e6ef1a8659600c77a556612c036434476cc3b6339: Status 404 returned error can't find the container with id fc688922f6eb596e1befb04e6ef1a8659600c77a556612c036434476cc3b6339 Feb 16 13:34:54 crc kubenswrapper[4812]: I0216 13:34:54.295685 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-72lrh"] Feb 16 13:34:54 crc kubenswrapper[4812]: I0216 13:34:54.299581 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-72lrh"] Feb 16 13:34:54 crc kubenswrapper[4812]: E0216 13:34:54.580855 4812 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 16 13:34:54 crc kubenswrapper[4812]: E0216 13:34:54.581020 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sddfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-fjz4f_openshift-marketplace(c1a9695b-636b-4b29-a6dd-4e0708706b74): ErrImagePull: rpc error: code = Canceled desc = 
copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 13:34:54 crc kubenswrapper[4812]: E0216 13:34:54.582231 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-fjz4f" podUID="c1a9695b-636b-4b29-a6dd-4e0708706b74" Feb 16 13:34:55 crc kubenswrapper[4812]: I0216 13:34:55.165902 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw64z" event={"ID":"68168164-88dd-4c28-824f-e1702db05aea","Type":"ContainerDied","Data":"edf0c3822f080ddb2b355578cda6d93bf5e1917c89fe43fc0afc354e0d669095"} Feb 16 13:34:55 crc kubenswrapper[4812]: I0216 13:34:55.165836 4812 generic.go:334] "Generic (PLEG): container finished" podID="68168164-88dd-4c28-824f-e1702db05aea" containerID="edf0c3822f080ddb2b355578cda6d93bf5e1917c89fe43fc0afc354e0d669095" exitCode=0 Feb 16 13:34:55 crc kubenswrapper[4812]: I0216 13:34:55.168136 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-szt79" event={"ID":"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c","Type":"ContainerStarted","Data":"348b6e16104d0f524cfd00b62ebd27f8871755b463bac8e6133c8309eab273b1"} Feb 16 13:34:55 crc kubenswrapper[4812]: I0216 13:34:55.168181 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-szt79" event={"ID":"d2a1f0c6-cafa-4c67-a2ad-d6003e88613c","Type":"ContainerStarted","Data":"f0470b72956366bded0a395ec736547da84fdec0eedd6c251584a6e2691f94b2"} Feb 16 13:34:55 crc kubenswrapper[4812]: E0216 13:34:55.170904 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lbfqw" podUID="1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" Feb 16 13:34:55 crc kubenswrapper[4812]: I0216 13:34:55.171060 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" event={"ID":"39832661-a5f7-43f8-825a-3814ef674ee0","Type":"ContainerStarted","Data":"81ad61e8411cc7370c26134393db6dea0a2d56c2939f156ed8681355dd412d0b"} Feb 16 13:34:55 crc kubenswrapper[4812]: I0216 13:34:55.171083 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:55 crc kubenswrapper[4812]: I0216 13:34:55.171092 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" event={"ID":"39832661-a5f7-43f8-825a-3814ef674ee0","Type":"ContainerStarted","Data":"fc688922f6eb596e1befb04e6ef1a8659600c77a556612c036434476cc3b6339"} Feb 16 13:34:55 crc kubenswrapper[4812]: I0216 13:34:55.171205 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:34:55 crc kubenswrapper[4812]: E0216 13:34:55.172122 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-fjz4f" podUID="c1a9695b-636b-4b29-a6dd-4e0708706b74" Feb 16 13:34:55 crc kubenswrapper[4812]: I0216 13:34:55.176642 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:34:55 crc kubenswrapper[4812]: I0216 13:34:55.177557 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:34:55 crc kubenswrapper[4812]: I0216 13:34:55.208717 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" podStartSLOduration=16.208691336 podStartE2EDuration="16.208691336s" podCreationTimestamp="2026-02-16 13:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:55.205998912 +0000 UTC m=+184.270329613" watchObservedRunningTime="2026-02-16 13:34:55.208691336 +0000 UTC m=+184.273022037" Feb 16 13:34:55 crc kubenswrapper[4812]: I0216 13:34:55.223499 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" podStartSLOduration=16.223480855 podStartE2EDuration="16.223480855s" podCreationTimestamp="2026-02-16 13:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:55.219969056 +0000 UTC m=+184.284299747" watchObservedRunningTime="2026-02-16 13:34:55.223480855 +0000 UTC m=+184.287811556" Feb 16 13:34:55 crc kubenswrapper[4812]: I0216 13:34:55.886880 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5be0ecd5-70de-4fa9-abcc-685cef55d530" path="/var/lib/kubelet/pods/5be0ecd5-70de-4fa9-abcc-685cef55d530/volumes" Feb 16 13:34:57 crc kubenswrapper[4812]: I0216 13:34:57.183146 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw64z" event={"ID":"68168164-88dd-4c28-824f-e1702db05aea","Type":"ContainerStarted","Data":"9dc88e8eb87f53bd523817880084a3d1d47e457aa036736dad7b03360c76b25f"} Feb 16 13:34:57 crc kubenswrapper[4812]: I0216 13:34:57.186253 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-zt4tm" event={"ID":"c25320a0-e0f3-40ae-b953-e249556bc4f6","Type":"ContainerStarted","Data":"12edc8073ea3225e00c47ffe27a771c12e43b82705d2c261411711f2d02e2232"} Feb 16 13:34:57 crc kubenswrapper[4812]: I0216 13:34:57.199742 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-szt79" podStartSLOduration=164.199727294 podStartE2EDuration="2m44.199727294s" podCreationTimestamp="2026-02-16 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:34:55.281496846 +0000 UTC m=+184.345827557" watchObservedRunningTime="2026-02-16 13:34:57.199727294 +0000 UTC m=+186.264057995" Feb 16 13:34:57 crc kubenswrapper[4812]: I0216 13:34:57.201824 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lw64z" podStartSLOduration=3.821300238 podStartE2EDuration="33.201817359s" podCreationTimestamp="2026-02-16 13:34:24 +0000 UTC" firstStartedPulling="2026-02-16 13:34:26.902996695 +0000 UTC m=+155.967327396" lastFinishedPulling="2026-02-16 13:34:56.283513816 +0000 UTC m=+185.347844517" observedRunningTime="2026-02-16 13:34:57.198413114 +0000 UTC m=+186.262743835" watchObservedRunningTime="2026-02-16 13:34:57.201817359 +0000 UTC m=+186.266148060" Feb 16 13:34:57 crc kubenswrapper[4812]: I0216 13:34:57.215876 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zt4tm" podStartSLOduration=2.711401122 podStartE2EDuration="36.215854035s" podCreationTimestamp="2026-02-16 13:34:21 +0000 UTC" firstStartedPulling="2026-02-16 13:34:22.668905284 +0000 UTC m=+151.733235985" lastFinishedPulling="2026-02-16 13:34:56.173358167 +0000 UTC m=+185.237688898" observedRunningTime="2026-02-16 13:34:57.212951315 +0000 UTC m=+186.277282036" watchObservedRunningTime="2026-02-16 
13:34:57.215854035 +0000 UTC m=+186.280184736" Feb 16 13:34:59 crc kubenswrapper[4812]: I0216 13:34:59.921696 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8"] Feb 16 13:34:59 crc kubenswrapper[4812]: I0216 13:34:59.922354 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" podUID="39832661-a5f7-43f8-825a-3814ef674ee0" containerName="controller-manager" containerID="cri-o://81ad61e8411cc7370c26134393db6dea0a2d56c2939f156ed8681355dd412d0b" gracePeriod=30 Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.025304 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4"] Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.025531 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" podUID="7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b" containerName="route-controller-manager" containerID="cri-o://5ac8a599a65f276ab0d6e980a7cd3e4f95462319c7cf4b6df0c202129058c574" gracePeriod=30 Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.215309 4812 generic.go:334] "Generic (PLEG): container finished" podID="7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b" containerID="5ac8a599a65f276ab0d6e980a7cd3e4f95462319c7cf4b6df0c202129058c574" exitCode=0 Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.215374 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" event={"ID":"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b","Type":"ContainerDied","Data":"5ac8a599a65f276ab0d6e980a7cd3e4f95462319c7cf4b6df0c202129058c574"} Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.217634 4812 generic.go:334] "Generic (PLEG): container finished" 
podID="39832661-a5f7-43f8-825a-3814ef674ee0" containerID="81ad61e8411cc7370c26134393db6dea0a2d56c2939f156ed8681355dd412d0b" exitCode=0 Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.217683 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" event={"ID":"39832661-a5f7-43f8-825a-3814ef674ee0","Type":"ContainerDied","Data":"81ad61e8411cc7370c26134393db6dea0a2d56c2939f156ed8681355dd412d0b"} Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.419935 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.425786 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.456597 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-client-ca\") pod \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\" (UID: \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\") " Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.456666 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j27b\" (UniqueName: \"kubernetes.io/projected/39832661-a5f7-43f8-825a-3814ef674ee0-kube-api-access-6j27b\") pod \"39832661-a5f7-43f8-825a-3814ef674ee0\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.456691 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-client-ca\") pod \"39832661-a5f7-43f8-825a-3814ef674ee0\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " Feb 16 13:35:00 crc 
kubenswrapper[4812]: I0216 13:35:00.456715 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-proxy-ca-bundles\") pod \"39832661-a5f7-43f8-825a-3814ef674ee0\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.456730 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39832661-a5f7-43f8-825a-3814ef674ee0-serving-cert\") pod \"39832661-a5f7-43f8-825a-3814ef674ee0\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.456745 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-config\") pod \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\" (UID: \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\") " Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.456786 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-serving-cert\") pod \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\" (UID: \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\") " Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.456804 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8t68\" (UniqueName: \"kubernetes.io/projected/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-kube-api-access-d8t68\") pod \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\" (UID: \"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b\") " Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.456820 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-config\") pod 
\"39832661-a5f7-43f8-825a-3814ef674ee0\" (UID: \"39832661-a5f7-43f8-825a-3814ef674ee0\") " Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.458261 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-client-ca" (OuterVolumeSpecName: "client-ca") pod "7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b" (UID: "7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.460031 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-config" (OuterVolumeSpecName: "config") pod "7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b" (UID: "7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.460727 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-client-ca" (OuterVolumeSpecName: "client-ca") pod "39832661-a5f7-43f8-825a-3814ef674ee0" (UID: "39832661-a5f7-43f8-825a-3814ef674ee0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.461110 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "39832661-a5f7-43f8-825a-3814ef674ee0" (UID: "39832661-a5f7-43f8-825a-3814ef674ee0"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.461221 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-config" (OuterVolumeSpecName: "config") pod "39832661-a5f7-43f8-825a-3814ef674ee0" (UID: "39832661-a5f7-43f8-825a-3814ef674ee0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.467620 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39832661-a5f7-43f8-825a-3814ef674ee0-kube-api-access-6j27b" (OuterVolumeSpecName: "kube-api-access-6j27b") pod "39832661-a5f7-43f8-825a-3814ef674ee0" (UID: "39832661-a5f7-43f8-825a-3814ef674ee0"). InnerVolumeSpecName "kube-api-access-6j27b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.469379 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39832661-a5f7-43f8-825a-3814ef674ee0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "39832661-a5f7-43f8-825a-3814ef674ee0" (UID: "39832661-a5f7-43f8-825a-3814ef674ee0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.475606 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b" (UID: "7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.477623 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-kube-api-access-d8t68" (OuterVolumeSpecName: "kube-api-access-d8t68") pod "7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b" (UID: "7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b"). InnerVolumeSpecName "kube-api-access-d8t68". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.481508 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 13:35:00 crc kubenswrapper[4812]: E0216 13:35:00.481764 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39832661-a5f7-43f8-825a-3814ef674ee0" containerName="controller-manager" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.481776 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="39832661-a5f7-43f8-825a-3814ef674ee0" containerName="controller-manager" Feb 16 13:35:00 crc kubenswrapper[4812]: E0216 13:35:00.481804 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b" containerName="route-controller-manager" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.481813 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b" containerName="route-controller-manager" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.481932 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="39832661-a5f7-43f8-825a-3814ef674ee0" containerName="controller-manager" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.481947 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b" containerName="route-controller-manager" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.482327 4812 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.483464 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.485200 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.486065 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.558029 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cc76715-e812-43f5-819b-ae595b966e01-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0cc76715-e812-43f5-819b-ae595b966e01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.558086 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cc76715-e812-43f5-819b-ae595b966e01-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0cc76715-e812-43f5-819b-ae595b966e01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.558160 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.558174 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j27b\" (UniqueName: \"kubernetes.io/projected/39832661-a5f7-43f8-825a-3814ef674ee0-kube-api-access-6j27b\") on node \"crc\" 
DevicePath \"\"" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.558246 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.558288 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.558304 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39832661-a5f7-43f8-825a-3814ef674ee0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.558317 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.558328 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.558340 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8t68\" (UniqueName: \"kubernetes.io/projected/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b-kube-api-access-d8t68\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.558353 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39832661-a5f7-43f8-825a-3814ef674ee0-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.660140 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cc76715-e812-43f5-819b-ae595b966e01-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0cc76715-e812-43f5-819b-ae595b966e01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.660369 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cc76715-e812-43f5-819b-ae595b966e01-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0cc76715-e812-43f5-819b-ae595b966e01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.660524 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cc76715-e812-43f5-819b-ae595b966e01-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0cc76715-e812-43f5-819b-ae595b966e01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.688144 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cc76715-e812-43f5-819b-ae595b966e01-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0cc76715-e812-43f5-819b-ae595b966e01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.794993 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 13:35:00 crc kubenswrapper[4812]: I0216 13:35:00.797983 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.116131 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b5879bb6b-fhngf"] Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.117325 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.118871 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj"] Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.119605 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.125432 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b5879bb6b-fhngf"] Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.128350 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj"] Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.216327 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.228712 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0cc76715-e812-43f5-819b-ae595b966e01","Type":"ContainerStarted","Data":"541797dd5d24f1f83d442ad96200d73cbdf29063b64be077ec6f7d9b32cf8a6b"} Feb 16 13:35:01 
crc kubenswrapper[4812]: I0216 13:35:01.232188 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" event={"ID":"7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b","Type":"ContainerDied","Data":"7c6f8f9b6df56fd6c5cd34d5fd946380ceea2c0f5d5a4d2e02f8edfbe6274778"} Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.232283 4812 scope.go:117] "RemoveContainer" containerID="5ac8a599a65f276ab0d6e980a7cd3e4f95462319c7cf4b6df0c202129058c574" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.232229 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.234595 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" event={"ID":"39832661-a5f7-43f8-825a-3814ef674ee0","Type":"ContainerDied","Data":"fc688922f6eb596e1befb04e6ef1a8659600c77a556612c036434476cc3b6339"} Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.234679 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.253816 4812 scope.go:117] "RemoveContainer" containerID="81ad61e8411cc7370c26134393db6dea0a2d56c2939f156ed8681355dd412d0b" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.268733 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8a40075-5c56-4a05-90a9-f2740388eb58-config\") pod \"route-controller-manager-5dcfbd7fb6-8qqsj\" (UID: \"f8a40075-5c56-4a05-90a9-f2740388eb58\") " pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.268774 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8a40075-5c56-4a05-90a9-f2740388eb58-serving-cert\") pod \"route-controller-manager-5dcfbd7fb6-8qqsj\" (UID: \"f8a40075-5c56-4a05-90a9-f2740388eb58\") " pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.268806 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-serving-cert\") pod \"controller-manager-6b5879bb6b-fhngf\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.268830 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-client-ca\") pod \"controller-manager-6b5879bb6b-fhngf\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " 
pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.269531 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-config\") pod \"controller-manager-6b5879bb6b-fhngf\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.269593 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4x8z\" (UniqueName: \"kubernetes.io/projected/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-kube-api-access-q4x8z\") pod \"controller-manager-6b5879bb6b-fhngf\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.269652 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-proxy-ca-bundles\") pod \"controller-manager-6b5879bb6b-fhngf\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.269731 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8a40075-5c56-4a05-90a9-f2740388eb58-client-ca\") pod \"route-controller-manager-5dcfbd7fb6-8qqsj\" (UID: \"f8a40075-5c56-4a05-90a9-f2740388eb58\") " pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.269766 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-zzblz\" (UniqueName: \"kubernetes.io/projected/f8a40075-5c56-4a05-90a9-f2740388eb58-kube-api-access-zzblz\") pod \"route-controller-manager-5dcfbd7fb6-8qqsj\" (UID: \"f8a40075-5c56-4a05-90a9-f2740388eb58\") " pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.293205 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4"] Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.297674 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-dlrh4"] Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.306286 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8"] Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.309852 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76c5dc67ff-7h5h8"] Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.370517 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4x8z\" (UniqueName: \"kubernetes.io/projected/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-kube-api-access-q4x8z\") pod \"controller-manager-6b5879bb6b-fhngf\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.370575 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-proxy-ca-bundles\") pod \"controller-manager-6b5879bb6b-fhngf\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc 
kubenswrapper[4812]: I0216 13:35:01.370620 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8a40075-5c56-4a05-90a9-f2740388eb58-client-ca\") pod \"route-controller-manager-5dcfbd7fb6-8qqsj\" (UID: \"f8a40075-5c56-4a05-90a9-f2740388eb58\") " pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.370651 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzblz\" (UniqueName: \"kubernetes.io/projected/f8a40075-5c56-4a05-90a9-f2740388eb58-kube-api-access-zzblz\") pod \"route-controller-manager-5dcfbd7fb6-8qqsj\" (UID: \"f8a40075-5c56-4a05-90a9-f2740388eb58\") " pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.370814 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8a40075-5c56-4a05-90a9-f2740388eb58-serving-cert\") pod \"route-controller-manager-5dcfbd7fb6-8qqsj\" (UID: \"f8a40075-5c56-4a05-90a9-f2740388eb58\") " pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.372258 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8a40075-5c56-4a05-90a9-f2740388eb58-config\") pod \"route-controller-manager-5dcfbd7fb6-8qqsj\" (UID: \"f8a40075-5c56-4a05-90a9-f2740388eb58\") " pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.372262 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8a40075-5c56-4a05-90a9-f2740388eb58-client-ca\") pod 
\"route-controller-manager-5dcfbd7fb6-8qqsj\" (UID: \"f8a40075-5c56-4a05-90a9-f2740388eb58\") " pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.372290 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-serving-cert\") pod \"controller-manager-6b5879bb6b-fhngf\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.372315 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-client-ca\") pod \"controller-manager-6b5879bb6b-fhngf\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.372351 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-config\") pod \"controller-manager-6b5879bb6b-fhngf\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.382457 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-proxy-ca-bundles\") pod \"controller-manager-6b5879bb6b-fhngf\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.382557 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f8a40075-5c56-4a05-90a9-f2740388eb58-config\") pod \"route-controller-manager-5dcfbd7fb6-8qqsj\" (UID: \"f8a40075-5c56-4a05-90a9-f2740388eb58\") " pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.383335 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-serving-cert\") pod \"controller-manager-6b5879bb6b-fhngf\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.383336 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8a40075-5c56-4a05-90a9-f2740388eb58-serving-cert\") pod \"route-controller-manager-5dcfbd7fb6-8qqsj\" (UID: \"f8a40075-5c56-4a05-90a9-f2740388eb58\") " pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.384025 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-client-ca\") pod \"controller-manager-6b5879bb6b-fhngf\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.384660 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-config\") pod \"controller-manager-6b5879bb6b-fhngf\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.386515 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q4x8z\" (UniqueName: \"kubernetes.io/projected/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-kube-api-access-q4x8z\") pod \"controller-manager-6b5879bb6b-fhngf\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.387928 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzblz\" (UniqueName: \"kubernetes.io/projected/f8a40075-5c56-4a05-90a9-f2740388eb58-kube-api-access-zzblz\") pod \"route-controller-manager-5dcfbd7fb6-8qqsj\" (UID: \"f8a40075-5c56-4a05-90a9-f2740388eb58\") " pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.462268 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.469430 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.600879 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.601224 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.655110 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b5879bb6b-fhngf"] Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.746227 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.885060 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39832661-a5f7-43f8-825a-3814ef674ee0" path="/var/lib/kubelet/pods/39832661-a5f7-43f8-825a-3814ef674ee0/volumes" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.885796 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b" path="/var/lib/kubelet/pods/7a0c99fc-1734-47c4-99d2-f26f8f5b9d5b/volumes" Feb 16 13:35:01 crc kubenswrapper[4812]: I0216 13:35:01.986233 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj"] Feb 16 13:35:02 crc kubenswrapper[4812]: I0216 13:35:02.241415 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" event={"ID":"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4","Type":"ContainerStarted","Data":"700b33c0dce6243a18f1da5e9c721699c16582fef4ce16606cd27be683d39862"} Feb 16 13:35:02 crc kubenswrapper[4812]: I0216 13:35:02.241724 4812 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" event={"ID":"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4","Type":"ContainerStarted","Data":"5ab51404baf57057518a07b7f2d39756a2eea6a40871976cebb3cb8f669d8a56"} Feb 16 13:35:02 crc kubenswrapper[4812]: I0216 13:35:02.242240 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:02 crc kubenswrapper[4812]: I0216 13:35:02.245509 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" event={"ID":"f8a40075-5c56-4a05-90a9-f2740388eb58","Type":"ContainerStarted","Data":"179030cd4122da3ab3b2f06540361d3e3fae9aa53195700682738df7ca9af315"} Feb 16 13:35:02 crc kubenswrapper[4812]: I0216 13:35:02.245548 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" event={"ID":"f8a40075-5c56-4a05-90a9-f2740388eb58","Type":"ContainerStarted","Data":"7310e1596f670f2dd8cb37dfd97f6e436427513b7ef0561424debf75e060a62c"} Feb 16 13:35:02 crc kubenswrapper[4812]: I0216 13:35:02.246368 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:02 crc kubenswrapper[4812]: I0216 13:35:02.247746 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:02 crc kubenswrapper[4812]: I0216 13:35:02.249255 4812 generic.go:334] "Generic (PLEG): container finished" podID="0cc76715-e812-43f5-819b-ae595b966e01" containerID="983eda9a370015c5b23c20c3fbfca1dbf0bafed80ef40084bbdbadf507de58f5" exitCode=0 Feb 16 13:35:02 crc kubenswrapper[4812]: I0216 13:35:02.249280 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0cc76715-e812-43f5-819b-ae595b966e01","Type":"ContainerDied","Data":"983eda9a370015c5b23c20c3fbfca1dbf0bafed80ef40084bbdbadf507de58f5"} Feb 16 13:35:02 crc kubenswrapper[4812]: I0216 13:35:02.259219 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" podStartSLOduration=3.259199527 podStartE2EDuration="3.259199527s" podCreationTimestamp="2026-02-16 13:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:35:02.258998401 +0000 UTC m=+191.323329102" watchObservedRunningTime="2026-02-16 13:35:02.259199527 +0000 UTC m=+191.323530228" Feb 16 13:35:02 crc kubenswrapper[4812]: I0216 13:35:02.308907 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:35:02 crc kubenswrapper[4812]: I0216 13:35:02.310216 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" podStartSLOduration=2.3102039149999998 podStartE2EDuration="2.310203915s" podCreationTimestamp="2026-02-16 13:35:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:35:02.293961831 +0000 UTC m=+191.358292632" watchObservedRunningTime="2026-02-16 13:35:02.310203915 +0000 UTC m=+191.374534616" Feb 16 13:35:02 crc kubenswrapper[4812]: I0216 13:35:02.757823 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:03 crc kubenswrapper[4812]: I0216 13:35:03.262249 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zt4tm"] Feb 16 
13:35:03 crc kubenswrapper[4812]: I0216 13:35:03.538180 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 13:35:03 crc kubenswrapper[4812]: I0216 13:35:03.703009 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cc76715-e812-43f5-819b-ae595b966e01-kube-api-access\") pod \"0cc76715-e812-43f5-819b-ae595b966e01\" (UID: \"0cc76715-e812-43f5-819b-ae595b966e01\") " Feb 16 13:35:03 crc kubenswrapper[4812]: I0216 13:35:03.703052 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cc76715-e812-43f5-819b-ae595b966e01-kubelet-dir\") pod \"0cc76715-e812-43f5-819b-ae595b966e01\" (UID: \"0cc76715-e812-43f5-819b-ae595b966e01\") " Feb 16 13:35:03 crc kubenswrapper[4812]: I0216 13:35:03.703227 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cc76715-e812-43f5-819b-ae595b966e01-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0cc76715-e812-43f5-819b-ae595b966e01" (UID: "0cc76715-e812-43f5-819b-ae595b966e01"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:35:03 crc kubenswrapper[4812]: I0216 13:35:03.703369 4812 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cc76715-e812-43f5-819b-ae595b966e01-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:03 crc kubenswrapper[4812]: I0216 13:35:03.708181 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cc76715-e812-43f5-819b-ae595b966e01-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0cc76715-e812-43f5-819b-ae595b966e01" (UID: "0cc76715-e812-43f5-819b-ae595b966e01"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:35:03 crc kubenswrapper[4812]: I0216 13:35:03.804905 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cc76715-e812-43f5-819b-ae595b966e01-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:04 crc kubenswrapper[4812]: I0216 13:35:04.275991 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 13:35:04 crc kubenswrapper[4812]: I0216 13:35:04.275990 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0cc76715-e812-43f5-819b-ae595b966e01","Type":"ContainerDied","Data":"541797dd5d24f1f83d442ad96200d73cbdf29063b64be077ec6f7d9b32cf8a6b"} Feb 16 13:35:04 crc kubenswrapper[4812]: I0216 13:35:04.276604 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="541797dd5d24f1f83d442ad96200d73cbdf29063b64be077ec6f7d9b32cf8a6b" Feb 16 13:35:04 crc kubenswrapper[4812]: I0216 13:35:04.276309 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zt4tm" podUID="c25320a0-e0f3-40ae-b953-e249556bc4f6" containerName="registry-server" containerID="cri-o://12edc8073ea3225e00c47ffe27a771c12e43b82705d2c261411711f2d02e2232" gracePeriod=2 Feb 16 13:35:04 crc kubenswrapper[4812]: I0216 13:35:04.402029 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4mg2p"] Feb 16 13:35:04 crc kubenswrapper[4812]: I0216 13:35:04.609491 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 13:35:04 crc kubenswrapper[4812]: I0216 13:35:04.609804 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 
13:35:04 crc kubenswrapper[4812]: I0216 13:35:04.680393 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 13:35:04 crc kubenswrapper[4812]: I0216 13:35:04.776996 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:35:04 crc kubenswrapper[4812]: I0216 13:35:04.919014 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25320a0-e0f3-40ae-b953-e249556bc4f6-catalog-content\") pod \"c25320a0-e0f3-40ae-b953-e249556bc4f6\" (UID: \"c25320a0-e0f3-40ae-b953-e249556bc4f6\") " Feb 16 13:35:04 crc kubenswrapper[4812]: I0216 13:35:04.919113 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25320a0-e0f3-40ae-b953-e249556bc4f6-utilities\") pod \"c25320a0-e0f3-40ae-b953-e249556bc4f6\" (UID: \"c25320a0-e0f3-40ae-b953-e249556bc4f6\") " Feb 16 13:35:04 crc kubenswrapper[4812]: I0216 13:35:04.919143 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v9kv\" (UniqueName: \"kubernetes.io/projected/c25320a0-e0f3-40ae-b953-e249556bc4f6-kube-api-access-9v9kv\") pod \"c25320a0-e0f3-40ae-b953-e249556bc4f6\" (UID: \"c25320a0-e0f3-40ae-b953-e249556bc4f6\") " Feb 16 13:35:04 crc kubenswrapper[4812]: I0216 13:35:04.919932 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c25320a0-e0f3-40ae-b953-e249556bc4f6-utilities" (OuterVolumeSpecName: "utilities") pod "c25320a0-e0f3-40ae-b953-e249556bc4f6" (UID: "c25320a0-e0f3-40ae-b953-e249556bc4f6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:35:04 crc kubenswrapper[4812]: I0216 13:35:04.923970 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c25320a0-e0f3-40ae-b953-e249556bc4f6-kube-api-access-9v9kv" (OuterVolumeSpecName: "kube-api-access-9v9kv") pod "c25320a0-e0f3-40ae-b953-e249556bc4f6" (UID: "c25320a0-e0f3-40ae-b953-e249556bc4f6"). InnerVolumeSpecName "kube-api-access-9v9kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:35:04 crc kubenswrapper[4812]: I0216 13:35:04.980735 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c25320a0-e0f3-40ae-b953-e249556bc4f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c25320a0-e0f3-40ae-b953-e249556bc4f6" (UID: "c25320a0-e0f3-40ae-b953-e249556bc4f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.020009 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25320a0-e0f3-40ae-b953-e249556bc4f6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.020039 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25320a0-e0f3-40ae-b953-e249556bc4f6-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.020050 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v9kv\" (UniqueName: \"kubernetes.io/projected/c25320a0-e0f3-40ae-b953-e249556bc4f6-kube-api-access-9v9kv\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.063372 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 13:35:05 crc kubenswrapper[4812]: E0216 13:35:05.063662 
4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25320a0-e0f3-40ae-b953-e249556bc4f6" containerName="registry-server" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.063681 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25320a0-e0f3-40ae-b953-e249556bc4f6" containerName="registry-server" Feb 16 13:35:05 crc kubenswrapper[4812]: E0216 13:35:05.063691 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cc76715-e812-43f5-819b-ae595b966e01" containerName="pruner" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.063699 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cc76715-e812-43f5-819b-ae595b966e01" containerName="pruner" Feb 16 13:35:05 crc kubenswrapper[4812]: E0216 13:35:05.063712 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25320a0-e0f3-40ae-b953-e249556bc4f6" containerName="extract-content" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.063719 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25320a0-e0f3-40ae-b953-e249556bc4f6" containerName="extract-content" Feb 16 13:35:05 crc kubenswrapper[4812]: E0216 13:35:05.063741 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25320a0-e0f3-40ae-b953-e249556bc4f6" containerName="extract-utilities" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.063749 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25320a0-e0f3-40ae-b953-e249556bc4f6" containerName="extract-utilities" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.063862 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cc76715-e812-43f5-819b-ae595b966e01" containerName="pruner" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.063881 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c25320a0-e0f3-40ae-b953-e249556bc4f6" containerName="registry-server" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.066156 4812 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.070922 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.071096 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.080324 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.222297 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4158c95-a923-4240-a8bc-f9c44270275e-kube-api-access\") pod \"installer-9-crc\" (UID: \"e4158c95-a923-4240-a8bc-f9c44270275e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.222384 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e4158c95-a923-4240-a8bc-f9c44270275e-var-lock\") pod \"installer-9-crc\" (UID: \"e4158c95-a923-4240-a8bc-f9c44270275e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.222496 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4158c95-a923-4240-a8bc-f9c44270275e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e4158c95-a923-4240-a8bc-f9c44270275e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.284213 4812 generic.go:334] "Generic (PLEG): container finished" podID="c25320a0-e0f3-40ae-b953-e249556bc4f6" 
containerID="12edc8073ea3225e00c47ffe27a771c12e43b82705d2c261411711f2d02e2232" exitCode=0 Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.284254 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zt4tm" event={"ID":"c25320a0-e0f3-40ae-b953-e249556bc4f6","Type":"ContainerDied","Data":"12edc8073ea3225e00c47ffe27a771c12e43b82705d2c261411711f2d02e2232"} Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.284303 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zt4tm" event={"ID":"c25320a0-e0f3-40ae-b953-e249556bc4f6","Type":"ContainerDied","Data":"1f936fb7a3ace551d9e18a255de6c23597b073c922952265438d5afb9cb40541"} Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.284298 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zt4tm" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.284345 4812 scope.go:117] "RemoveContainer" containerID="12edc8073ea3225e00c47ffe27a771c12e43b82705d2c261411711f2d02e2232" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.320506 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zt4tm"] Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.323425 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zt4tm"] Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.323815 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4158c95-a923-4240-a8bc-f9c44270275e-kube-api-access\") pod \"installer-9-crc\" (UID: \"e4158c95-a923-4240-a8bc-f9c44270275e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.323873 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/e4158c95-a923-4240-a8bc-f9c44270275e-var-lock\") pod \"installer-9-crc\" (UID: \"e4158c95-a923-4240-a8bc-f9c44270275e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.323916 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4158c95-a923-4240-a8bc-f9c44270275e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e4158c95-a923-4240-a8bc-f9c44270275e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.323987 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4158c95-a923-4240-a8bc-f9c44270275e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e4158c95-a923-4240-a8bc-f9c44270275e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.324279 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e4158c95-a923-4240-a8bc-f9c44270275e-var-lock\") pod \"installer-9-crc\" (UID: \"e4158c95-a923-4240-a8bc-f9c44270275e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.328481 4812 scope.go:117] "RemoveContainer" containerID="3619fc495b87e40610f213629f283e31bdcb79c94f7aeda343c25e6b698b146e" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.339200 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.347420 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4158c95-a923-4240-a8bc-f9c44270275e-kube-api-access\") pod \"installer-9-crc\" (UID: \"e4158c95-a923-4240-a8bc-f9c44270275e\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.350212 4812 scope.go:117] "RemoveContainer" containerID="f2032441992b3698e54b610589005cacc36456cb0ab4bff4e34711e25b6b7cf0" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.383107 4812 scope.go:117] "RemoveContainer" containerID="12edc8073ea3225e00c47ffe27a771c12e43b82705d2c261411711f2d02e2232" Feb 16 13:35:05 crc kubenswrapper[4812]: E0216 13:35:05.383850 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12edc8073ea3225e00c47ffe27a771c12e43b82705d2c261411711f2d02e2232\": container with ID starting with 12edc8073ea3225e00c47ffe27a771c12e43b82705d2c261411711f2d02e2232 not found: ID does not exist" containerID="12edc8073ea3225e00c47ffe27a771c12e43b82705d2c261411711f2d02e2232" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.383907 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12edc8073ea3225e00c47ffe27a771c12e43b82705d2c261411711f2d02e2232"} err="failed to get container status \"12edc8073ea3225e00c47ffe27a771c12e43b82705d2c261411711f2d02e2232\": rpc error: code = NotFound desc = could not find container \"12edc8073ea3225e00c47ffe27a771c12e43b82705d2c261411711f2d02e2232\": container with ID starting with 12edc8073ea3225e00c47ffe27a771c12e43b82705d2c261411711f2d02e2232 not found: ID does not exist" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.383976 4812 scope.go:117] "RemoveContainer" containerID="3619fc495b87e40610f213629f283e31bdcb79c94f7aeda343c25e6b698b146e" Feb 16 13:35:05 crc kubenswrapper[4812]: E0216 13:35:05.384403 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3619fc495b87e40610f213629f283e31bdcb79c94f7aeda343c25e6b698b146e\": container with ID starting with 3619fc495b87e40610f213629f283e31bdcb79c94f7aeda343c25e6b698b146e 
not found: ID does not exist" containerID="3619fc495b87e40610f213629f283e31bdcb79c94f7aeda343c25e6b698b146e" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.384430 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3619fc495b87e40610f213629f283e31bdcb79c94f7aeda343c25e6b698b146e"} err="failed to get container status \"3619fc495b87e40610f213629f283e31bdcb79c94f7aeda343c25e6b698b146e\": rpc error: code = NotFound desc = could not find container \"3619fc495b87e40610f213629f283e31bdcb79c94f7aeda343c25e6b698b146e\": container with ID starting with 3619fc495b87e40610f213629f283e31bdcb79c94f7aeda343c25e6b698b146e not found: ID does not exist" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.384464 4812 scope.go:117] "RemoveContainer" containerID="f2032441992b3698e54b610589005cacc36456cb0ab4bff4e34711e25b6b7cf0" Feb 16 13:35:05 crc kubenswrapper[4812]: E0216 13:35:05.385025 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2032441992b3698e54b610589005cacc36456cb0ab4bff4e34711e25b6b7cf0\": container with ID starting with f2032441992b3698e54b610589005cacc36456cb0ab4bff4e34711e25b6b7cf0 not found: ID does not exist" containerID="f2032441992b3698e54b610589005cacc36456cb0ab4bff4e34711e25b6b7cf0" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.385087 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2032441992b3698e54b610589005cacc36456cb0ab4bff4e34711e25b6b7cf0"} err="failed to get container status \"f2032441992b3698e54b610589005cacc36456cb0ab4bff4e34711e25b6b7cf0\": rpc error: code = NotFound desc = could not find container \"f2032441992b3698e54b610589005cacc36456cb0ab4bff4e34711e25b6b7cf0\": container with ID starting with f2032441992b3698e54b610589005cacc36456cb0ab4bff4e34711e25b6b7cf0 not found: ID does not exist" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 
13:35:05.386524 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.886592 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c25320a0-e0f3-40ae-b953-e249556bc4f6" path="/var/lib/kubelet/pods/c25320a0-e0f3-40ae-b953-e249556bc4f6/volumes" Feb 16 13:35:05 crc kubenswrapper[4812]: I0216 13:35:05.887576 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 13:35:06 crc kubenswrapper[4812]: I0216 13:35:06.293957 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e4158c95-a923-4240-a8bc-f9c44270275e","Type":"ContainerStarted","Data":"056a53ea3f3ece5be2b8c240f485b975b2ff0a4875f941bf118b4388f6ccbbbe"} Feb 16 13:35:06 crc kubenswrapper[4812]: I0216 13:35:06.294269 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e4158c95-a923-4240-a8bc-f9c44270275e","Type":"ContainerStarted","Data":"a312e14b6eac708160181cb07b0ae5eb045f3a0deccaf3d61f848e277857b7ba"} Feb 16 13:35:06 crc kubenswrapper[4812]: I0216 13:35:06.309310 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.309288945 podStartE2EDuration="1.309288945s" podCreationTimestamp="2026-02-16 13:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:35:06.307951596 +0000 UTC m=+195.372282307" watchObservedRunningTime="2026-02-16 13:35:06.309288945 +0000 UTC m=+195.373619646" Feb 16 13:35:06 crc kubenswrapper[4812]: I0216 13:35:06.461150 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lw64z"] Feb 16 13:35:07 crc kubenswrapper[4812]: I0216 13:35:07.297227 4812 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lw64z" podUID="68168164-88dd-4c28-824f-e1702db05aea" containerName="registry-server" containerID="cri-o://9dc88e8eb87f53bd523817880084a3d1d47e457aa036736dad7b03360c76b25f" gracePeriod=2 Feb 16 13:35:07 crc kubenswrapper[4812]: I0216 13:35:07.749974 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 13:35:07 crc kubenswrapper[4812]: I0216 13:35:07.858249 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68168164-88dd-4c28-824f-e1702db05aea-catalog-content\") pod \"68168164-88dd-4c28-824f-e1702db05aea\" (UID: \"68168164-88dd-4c28-824f-e1702db05aea\") " Feb 16 13:35:07 crc kubenswrapper[4812]: I0216 13:35:07.858328 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68168164-88dd-4c28-824f-e1702db05aea-utilities\") pod \"68168164-88dd-4c28-824f-e1702db05aea\" (UID: \"68168164-88dd-4c28-824f-e1702db05aea\") " Feb 16 13:35:07 crc kubenswrapper[4812]: I0216 13:35:07.858368 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t7x4\" (UniqueName: \"kubernetes.io/projected/68168164-88dd-4c28-824f-e1702db05aea-kube-api-access-9t7x4\") pod \"68168164-88dd-4c28-824f-e1702db05aea\" (UID: \"68168164-88dd-4c28-824f-e1702db05aea\") " Feb 16 13:35:07 crc kubenswrapper[4812]: I0216 13:35:07.859003 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68168164-88dd-4c28-824f-e1702db05aea-utilities" (OuterVolumeSpecName: "utilities") pod "68168164-88dd-4c28-824f-e1702db05aea" (UID: "68168164-88dd-4c28-824f-e1702db05aea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:35:07 crc kubenswrapper[4812]: I0216 13:35:07.859565 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68168164-88dd-4c28-824f-e1702db05aea-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:07 crc kubenswrapper[4812]: I0216 13:35:07.864117 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68168164-88dd-4c28-824f-e1702db05aea-kube-api-access-9t7x4" (OuterVolumeSpecName: "kube-api-access-9t7x4") pod "68168164-88dd-4c28-824f-e1702db05aea" (UID: "68168164-88dd-4c28-824f-e1702db05aea"). InnerVolumeSpecName "kube-api-access-9t7x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:35:07 crc kubenswrapper[4812]: I0216 13:35:07.961174 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t7x4\" (UniqueName: \"kubernetes.io/projected/68168164-88dd-4c28-824f-e1702db05aea-kube-api-access-9t7x4\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:07 crc kubenswrapper[4812]: I0216 13:35:07.991460 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68168164-88dd-4c28-824f-e1702db05aea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68168164-88dd-4c28-824f-e1702db05aea" (UID: "68168164-88dd-4c28-824f-e1702db05aea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.062970 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68168164-88dd-4c28-824f-e1702db05aea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.303772 4812 generic.go:334] "Generic (PLEG): container finished" podID="68168164-88dd-4c28-824f-e1702db05aea" containerID="9dc88e8eb87f53bd523817880084a3d1d47e457aa036736dad7b03360c76b25f" exitCode=0 Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.303845 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lw64z" Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.303871 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw64z" event={"ID":"68168164-88dd-4c28-824f-e1702db05aea","Type":"ContainerDied","Data":"9dc88e8eb87f53bd523817880084a3d1d47e457aa036736dad7b03360c76b25f"} Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.304102 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw64z" event={"ID":"68168164-88dd-4c28-824f-e1702db05aea","Type":"ContainerDied","Data":"8591abf65d798941bb85fdb8d72141a746d54422eea8d59092312439f402602b"} Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.304124 4812 scope.go:117] "RemoveContainer" containerID="9dc88e8eb87f53bd523817880084a3d1d47e457aa036736dad7b03360c76b25f" Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.306307 4812 generic.go:334] "Generic (PLEG): container finished" podID="2984d252-d29e-49b5-87ed-9ce7d19edc6d" containerID="1a26a6535f9fa9d65575d2c892045a0a7ae14e1e81452a06a7bd9f3ae6746df3" exitCode=0 Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.306358 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-t9zmh" event={"ID":"2984d252-d29e-49b5-87ed-9ce7d19edc6d","Type":"ContainerDied","Data":"1a26a6535f9fa9d65575d2c892045a0a7ae14e1e81452a06a7bd9f3ae6746df3"} Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.318690 4812 scope.go:117] "RemoveContainer" containerID="edf0c3822f080ddb2b355578cda6d93bf5e1917c89fe43fc0afc354e0d669095" Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.343911 4812 scope.go:117] "RemoveContainer" containerID="4c595c972a862b05758226cac26d669c828114cb8a3d6c37614970811d5bc39d" Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.346212 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lw64z"] Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.349847 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lw64z"] Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.355716 4812 scope.go:117] "RemoveContainer" containerID="9dc88e8eb87f53bd523817880084a3d1d47e457aa036736dad7b03360c76b25f" Feb 16 13:35:08 crc kubenswrapper[4812]: E0216 13:35:08.356045 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dc88e8eb87f53bd523817880084a3d1d47e457aa036736dad7b03360c76b25f\": container with ID starting with 9dc88e8eb87f53bd523817880084a3d1d47e457aa036736dad7b03360c76b25f not found: ID does not exist" containerID="9dc88e8eb87f53bd523817880084a3d1d47e457aa036736dad7b03360c76b25f" Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.356084 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc88e8eb87f53bd523817880084a3d1d47e457aa036736dad7b03360c76b25f"} err="failed to get container status \"9dc88e8eb87f53bd523817880084a3d1d47e457aa036736dad7b03360c76b25f\": rpc error: code = NotFound desc = could not find container 
\"9dc88e8eb87f53bd523817880084a3d1d47e457aa036736dad7b03360c76b25f\": container with ID starting with 9dc88e8eb87f53bd523817880084a3d1d47e457aa036736dad7b03360c76b25f not found: ID does not exist" Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.356114 4812 scope.go:117] "RemoveContainer" containerID="edf0c3822f080ddb2b355578cda6d93bf5e1917c89fe43fc0afc354e0d669095" Feb 16 13:35:08 crc kubenswrapper[4812]: E0216 13:35:08.356354 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edf0c3822f080ddb2b355578cda6d93bf5e1917c89fe43fc0afc354e0d669095\": container with ID starting with edf0c3822f080ddb2b355578cda6d93bf5e1917c89fe43fc0afc354e0d669095 not found: ID does not exist" containerID="edf0c3822f080ddb2b355578cda6d93bf5e1917c89fe43fc0afc354e0d669095" Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.356374 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edf0c3822f080ddb2b355578cda6d93bf5e1917c89fe43fc0afc354e0d669095"} err="failed to get container status \"edf0c3822f080ddb2b355578cda6d93bf5e1917c89fe43fc0afc354e0d669095\": rpc error: code = NotFound desc = could not find container \"edf0c3822f080ddb2b355578cda6d93bf5e1917c89fe43fc0afc354e0d669095\": container with ID starting with edf0c3822f080ddb2b355578cda6d93bf5e1917c89fe43fc0afc354e0d669095 not found: ID does not exist" Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.356390 4812 scope.go:117] "RemoveContainer" containerID="4c595c972a862b05758226cac26d669c828114cb8a3d6c37614970811d5bc39d" Feb 16 13:35:08 crc kubenswrapper[4812]: E0216 13:35:08.356932 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c595c972a862b05758226cac26d669c828114cb8a3d6c37614970811d5bc39d\": container with ID starting with 4c595c972a862b05758226cac26d669c828114cb8a3d6c37614970811d5bc39d not found: ID does not exist" 
containerID="4c595c972a862b05758226cac26d669c828114cb8a3d6c37614970811d5bc39d" Feb 16 13:35:08 crc kubenswrapper[4812]: I0216 13:35:08.356956 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c595c972a862b05758226cac26d669c828114cb8a3d6c37614970811d5bc39d"} err="failed to get container status \"4c595c972a862b05758226cac26d669c828114cb8a3d6c37614970811d5bc39d\": rpc error: code = NotFound desc = could not find container \"4c595c972a862b05758226cac26d669c828114cb8a3d6c37614970811d5bc39d\": container with ID starting with 4c595c972a862b05758226cac26d669c828114cb8a3d6c37614970811d5bc39d not found: ID does not exist" Feb 16 13:35:09 crc kubenswrapper[4812]: I0216 13:35:09.313035 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t9zmh" event={"ID":"2984d252-d29e-49b5-87ed-9ce7d19edc6d","Type":"ContainerStarted","Data":"d9a430ff631e78c5495fe28e0ee2a6e74352dbc43d3c25fa783905541e8adeb8"} Feb 16 13:35:09 crc kubenswrapper[4812]: I0216 13:35:09.315177 4812 generic.go:334] "Generic (PLEG): container finished" podID="567e2fcc-e342-41e9-a406-4758f7c5551e" containerID="c5cde2928e76ed34ec0873c532eb57bc500471adc517cf29b2cdc9ff26cd725c" exitCode=0 Feb 16 13:35:09 crc kubenswrapper[4812]: I0216 13:35:09.315202 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfhfv" event={"ID":"567e2fcc-e342-41e9-a406-4758f7c5551e","Type":"ContainerDied","Data":"c5cde2928e76ed34ec0873c532eb57bc500471adc517cf29b2cdc9ff26cd725c"} Feb 16 13:35:09 crc kubenswrapper[4812]: I0216 13:35:09.333776 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t9zmh" podStartSLOduration=3.328011833 podStartE2EDuration="49.333758052s" podCreationTimestamp="2026-02-16 13:34:20 +0000 UTC" firstStartedPulling="2026-02-16 13:34:22.67135644 +0000 UTC m=+151.735687141" lastFinishedPulling="2026-02-16 
13:35:08.677102659 +0000 UTC m=+197.741433360" observedRunningTime="2026-02-16 13:35:09.331263438 +0000 UTC m=+198.395594149" watchObservedRunningTime="2026-02-16 13:35:09.333758052 +0000 UTC m=+198.398088763" Feb 16 13:35:09 crc kubenswrapper[4812]: I0216 13:35:09.889998 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68168164-88dd-4c28-824f-e1702db05aea" path="/var/lib/kubelet/pods/68168164-88dd-4c28-824f-e1702db05aea/volumes" Feb 16 13:35:10 crc kubenswrapper[4812]: I0216 13:35:10.321327 4812 generic.go:334] "Generic (PLEG): container finished" podID="a297c2d9-88a8-4019-94f5-c1f5498bee86" containerID="3a06eaf7fdb4d05744ed3eca6f60920e828573eafbbf1a081f69d46f05441696" exitCode=0 Feb 16 13:35:10 crc kubenswrapper[4812]: I0216 13:35:10.321382 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqhcl" event={"ID":"a297c2d9-88a8-4019-94f5-c1f5498bee86","Type":"ContainerDied","Data":"3a06eaf7fdb4d05744ed3eca6f60920e828573eafbbf1a081f69d46f05441696"} Feb 16 13:35:10 crc kubenswrapper[4812]: I0216 13:35:10.324830 4812 generic.go:334] "Generic (PLEG): container finished" podID="1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" containerID="1b9fd1189389c3723f97e0dc52f73f122778955e8a82c8c51ee1ac2df6466f5d" exitCode=0 Feb 16 13:35:10 crc kubenswrapper[4812]: I0216 13:35:10.325560 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbfqw" event={"ID":"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce","Type":"ContainerDied","Data":"1b9fd1189389c3723f97e0dc52f73f122778955e8a82c8c51ee1ac2df6466f5d"} Feb 16 13:35:10 crc kubenswrapper[4812]: I0216 13:35:10.328996 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfhfv" event={"ID":"567e2fcc-e342-41e9-a406-4758f7c5551e","Type":"ContainerStarted","Data":"c5bb604f6590b8644e494f5182c4c1c2ba349637a33d8ed037a109968ac8efc3"} Feb 16 13:35:10 crc kubenswrapper[4812]: I0216 13:35:10.331596 
4812 generic.go:334] "Generic (PLEG): container finished" podID="f4e6d69a-43ea-4b9b-a150-640b86bfbf42" containerID="380b01cd27500959fda3b9ff3f3a14731ad906b2f357aba0103177e16e77bc16" exitCode=0 Feb 16 13:35:10 crc kubenswrapper[4812]: I0216 13:35:10.331626 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxh5t" event={"ID":"f4e6d69a-43ea-4b9b-a150-640b86bfbf42","Type":"ContainerDied","Data":"380b01cd27500959fda3b9ff3f3a14731ad906b2f357aba0103177e16e77bc16"} Feb 16 13:35:10 crc kubenswrapper[4812]: I0216 13:35:10.377059 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gfhfv" podStartSLOduration=3.240870627 podStartE2EDuration="50.377040217s" podCreationTimestamp="2026-02-16 13:34:20 +0000 UTC" firstStartedPulling="2026-02-16 13:34:22.68262103 +0000 UTC m=+151.746951741" lastFinishedPulling="2026-02-16 13:35:09.81879063 +0000 UTC m=+198.883121331" observedRunningTime="2026-02-16 13:35:10.372526452 +0000 UTC m=+199.436857163" watchObservedRunningTime="2026-02-16 13:35:10.377040217 +0000 UTC m=+199.441370928" Feb 16 13:35:10 crc kubenswrapper[4812]: I0216 13:35:10.997756 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gfhfv" Feb 16 13:35:10 crc kubenswrapper[4812]: I0216 13:35:10.998185 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gfhfv" Feb 16 13:35:11 crc kubenswrapper[4812]: I0216 13:35:11.179037 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t9zmh" Feb 16 13:35:11 crc kubenswrapper[4812]: I0216 13:35:11.179098 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t9zmh" Feb 16 13:35:11 crc kubenswrapper[4812]: I0216 13:35:11.221135 4812 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t9zmh" Feb 16 13:35:11 crc kubenswrapper[4812]: I0216 13:35:11.340615 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxh5t" event={"ID":"f4e6d69a-43ea-4b9b-a150-640b86bfbf42","Type":"ContainerStarted","Data":"08d35b6093da03d2b92182fa9bf61c1f2b525b664a60bc95ecdb179791eec7eb"} Feb 16 13:35:11 crc kubenswrapper[4812]: I0216 13:35:11.342721 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqhcl" event={"ID":"a297c2d9-88a8-4019-94f5-c1f5498bee86","Type":"ContainerStarted","Data":"4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626"} Feb 16 13:35:11 crc kubenswrapper[4812]: I0216 13:35:11.346946 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbfqw" event={"ID":"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce","Type":"ContainerStarted","Data":"2a47df43b56b2bf43f35492ef36b24c2fc2d53e5430fc405d26bd8c7d811d475"} Feb 16 13:35:11 crc kubenswrapper[4812]: I0216 13:35:11.348627 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fjz4f" event={"ID":"c1a9695b-636b-4b29-a6dd-4e0708706b74","Type":"ContainerStarted","Data":"8181840cd3084798a64ba8046f1bbf1e3e6ddc841abbec235ea39855c82de027"} Feb 16 13:35:11 crc kubenswrapper[4812]: I0216 13:35:11.371776 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cxh5t" podStartSLOduration=2.2764775569999998 podStartE2EDuration="50.371754806s" podCreationTimestamp="2026-02-16 13:34:21 +0000 UTC" firstStartedPulling="2026-02-16 13:34:22.691213567 +0000 UTC m=+151.755544268" lastFinishedPulling="2026-02-16 13:35:10.786490816 +0000 UTC m=+199.850821517" observedRunningTime="2026-02-16 13:35:11.367988003 +0000 UTC m=+200.432318714" watchObservedRunningTime="2026-02-16 13:35:11.371754806 
+0000 UTC m=+200.436085507" Feb 16 13:35:11 crc kubenswrapper[4812]: I0216 13:35:11.391863 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lbfqw" podStartSLOduration=2.2543640050000002 podStartE2EDuration="48.391847153s" podCreationTimestamp="2026-02-16 13:34:23 +0000 UTC" firstStartedPulling="2026-02-16 13:34:24.792018192 +0000 UTC m=+153.856348893" lastFinishedPulling="2026-02-16 13:35:10.92950134 +0000 UTC m=+199.993832041" observedRunningTime="2026-02-16 13:35:11.388480313 +0000 UTC m=+200.452811014" watchObservedRunningTime="2026-02-16 13:35:11.391847153 +0000 UTC m=+200.456177854" Feb 16 13:35:11 crc kubenswrapper[4812]: I0216 13:35:11.431607 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cqhcl" podStartSLOduration=3.546255896 podStartE2EDuration="49.431588735s" podCreationTimestamp="2026-02-16 13:34:22 +0000 UTC" firstStartedPulling="2026-02-16 13:34:24.851768007 +0000 UTC m=+153.916098708" lastFinishedPulling="2026-02-16 13:35:10.737100856 +0000 UTC m=+199.801431547" observedRunningTime="2026-02-16 13:35:11.427767822 +0000 UTC m=+200.492098543" watchObservedRunningTime="2026-02-16 13:35:11.431588735 +0000 UTC m=+200.495919436" Feb 16 13:35:11 crc kubenswrapper[4812]: I0216 13:35:11.436455 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:35:11 crc kubenswrapper[4812]: I0216 13:35:11.436502 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:35:12 crc kubenswrapper[4812]: I0216 13:35:12.046608 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gfhfv" podUID="567e2fcc-e342-41e9-a406-4758f7c5551e" containerName="registry-server" probeResult="failure" output=< Feb 16 13:35:12 crc kubenswrapper[4812]: 
timeout: failed to connect service ":50051" within 1s Feb 16 13:35:12 crc kubenswrapper[4812]: > Feb 16 13:35:12 crc kubenswrapper[4812]: I0216 13:35:12.358153 4812 generic.go:334] "Generic (PLEG): container finished" podID="c1a9695b-636b-4b29-a6dd-4e0708706b74" containerID="8181840cd3084798a64ba8046f1bbf1e3e6ddc841abbec235ea39855c82de027" exitCode=0 Feb 16 13:35:12 crc kubenswrapper[4812]: I0216 13:35:12.358237 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fjz4f" event={"ID":"c1a9695b-636b-4b29-a6dd-4e0708706b74","Type":"ContainerDied","Data":"8181840cd3084798a64ba8046f1bbf1e3e6ddc841abbec235ea39855c82de027"} Feb 16 13:35:12 crc kubenswrapper[4812]: I0216 13:35:12.477251 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cxh5t" podUID="f4e6d69a-43ea-4b9b-a150-640b86bfbf42" containerName="registry-server" probeResult="failure" output=< Feb 16 13:35:12 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 16 13:35:12 crc kubenswrapper[4812]: > Feb 16 13:35:13 crc kubenswrapper[4812]: I0216 13:35:13.224125 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cqhcl" Feb 16 13:35:13 crc kubenswrapper[4812]: I0216 13:35:13.224386 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cqhcl" Feb 16 13:35:13 crc kubenswrapper[4812]: I0216 13:35:13.266178 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cqhcl" Feb 16 13:35:13 crc kubenswrapper[4812]: I0216 13:35:13.365173 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fjz4f" event={"ID":"c1a9695b-636b-4b29-a6dd-4e0708706b74","Type":"ContainerStarted","Data":"2375994974e7fd5af92b6e11947c9e40cbeeb5f3774b9a88742bec12ffb25c76"} Feb 16 13:35:13 crc 
kubenswrapper[4812]: I0216 13:35:13.383689 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fjz4f" podStartSLOduration=3.462903153 podStartE2EDuration="50.383670354s" podCreationTimestamp="2026-02-16 13:34:23 +0000 UTC" firstStartedPulling="2026-02-16 13:34:25.866118301 +0000 UTC m=+154.930449012" lastFinishedPulling="2026-02-16 13:35:12.786885512 +0000 UTC m=+201.851216213" observedRunningTime="2026-02-16 13:35:13.381405727 +0000 UTC m=+202.445736428" watchObservedRunningTime="2026-02-16 13:35:13.383670354 +0000 UTC m=+202.448001055" Feb 16 13:35:13 crc kubenswrapper[4812]: I0216 13:35:13.563367 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:35:13 crc kubenswrapper[4812]: I0216 13:35:13.563415 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:35:13 crc kubenswrapper[4812]: I0216 13:35:13.604651 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:35:14 crc kubenswrapper[4812]: I0216 13:35:14.228734 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fjz4f" Feb 16 13:35:14 crc kubenswrapper[4812]: I0216 13:35:14.229042 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fjz4f" Feb 16 13:35:14 crc kubenswrapper[4812]: I0216 13:35:14.549309 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:35:14 crc kubenswrapper[4812]: I0216 13:35:14.549377 4812 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:35:14 crc kubenswrapper[4812]: I0216 13:35:14.549431 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:35:14 crc kubenswrapper[4812]: I0216 13:35:14.550072 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0fe6225f5f96d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6"} pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:35:14 crc kubenswrapper[4812]: I0216 13:35:14.550159 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" containerID="cri-o://0fe6225f5f96d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6" gracePeriod=600 Feb 16 13:35:15 crc kubenswrapper[4812]: I0216 13:35:15.267747 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fjz4f" podUID="c1a9695b-636b-4b29-a6dd-4e0708706b74" containerName="registry-server" probeResult="failure" output=< Feb 16 13:35:15 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 16 13:35:15 crc kubenswrapper[4812]: > Feb 16 13:35:15 crc kubenswrapper[4812]: I0216 13:35:15.397048 4812 generic.go:334] "Generic (PLEG): container finished" podID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerID="0fe6225f5f96d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6" 
exitCode=0 Feb 16 13:35:15 crc kubenswrapper[4812]: I0216 13:35:15.397412 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerDied","Data":"0fe6225f5f96d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6"} Feb 16 13:35:16 crc kubenswrapper[4812]: I0216 13:35:16.403467 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerStarted","Data":"0dea0551bdc1dbe8171150e4ea91a5f7a4c6365d605948c214a9ad8e715fdd89"} Feb 16 13:35:19 crc kubenswrapper[4812]: I0216 13:35:19.905202 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b5879bb6b-fhngf"] Feb 16 13:35:19 crc kubenswrapper[4812]: I0216 13:35:19.906140 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" podUID="57c1e30a-fa85-4ab0-9e0c-d409f13f24d4" containerName="controller-manager" containerID="cri-o://700b33c0dce6243a18f1da5e9c721699c16582fef4ce16606cd27be683d39862" gracePeriod=30 Feb 16 13:35:19 crc kubenswrapper[4812]: I0216 13:35:19.927944 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj"] Feb 16 13:35:19 crc kubenswrapper[4812]: I0216 13:35:19.928187 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" podUID="f8a40075-5c56-4a05-90a9-f2740388eb58" containerName="route-controller-manager" containerID="cri-o://179030cd4122da3ab3b2f06540361d3e3fae9aa53195700682738df7ca9af315" gracePeriod=30 Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.424021 4812 generic.go:334] "Generic (PLEG): container finished" 
podID="57c1e30a-fa85-4ab0-9e0c-d409f13f24d4" containerID="700b33c0dce6243a18f1da5e9c721699c16582fef4ce16606cd27be683d39862" exitCode=0 Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.424292 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" event={"ID":"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4","Type":"ContainerDied","Data":"700b33c0dce6243a18f1da5e9c721699c16582fef4ce16606cd27be683d39862"} Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.426138 4812 generic.go:334] "Generic (PLEG): container finished" podID="f8a40075-5c56-4a05-90a9-f2740388eb58" containerID="179030cd4122da3ab3b2f06540361d3e3fae9aa53195700682738df7ca9af315" exitCode=0 Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.426196 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" event={"ID":"f8a40075-5c56-4a05-90a9-f2740388eb58","Type":"ContainerDied","Data":"179030cd4122da3ab3b2f06540361d3e3fae9aa53195700682738df7ca9af315"} Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.426227 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" event={"ID":"f8a40075-5c56-4a05-90a9-f2740388eb58","Type":"ContainerDied","Data":"7310e1596f670f2dd8cb37dfd97f6e436427513b7ef0561424debf75e060a62c"} Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.426241 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7310e1596f670f2dd8cb37dfd97f6e436427513b7ef0561424debf75e060a62c" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.440728 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.519552 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8a40075-5c56-4a05-90a9-f2740388eb58-config\") pod \"f8a40075-5c56-4a05-90a9-f2740388eb58\" (UID: \"f8a40075-5c56-4a05-90a9-f2740388eb58\") " Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.519670 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8a40075-5c56-4a05-90a9-f2740388eb58-serving-cert\") pod \"f8a40075-5c56-4a05-90a9-f2740388eb58\" (UID: \"f8a40075-5c56-4a05-90a9-f2740388eb58\") " Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.519701 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzblz\" (UniqueName: \"kubernetes.io/projected/f8a40075-5c56-4a05-90a9-f2740388eb58-kube-api-access-zzblz\") pod \"f8a40075-5c56-4a05-90a9-f2740388eb58\" (UID: \"f8a40075-5c56-4a05-90a9-f2740388eb58\") " Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.519725 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8a40075-5c56-4a05-90a9-f2740388eb58-client-ca\") pod \"f8a40075-5c56-4a05-90a9-f2740388eb58\" (UID: \"f8a40075-5c56-4a05-90a9-f2740388eb58\") " Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.520729 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8a40075-5c56-4a05-90a9-f2740388eb58-client-ca" (OuterVolumeSpecName: "client-ca") pod "f8a40075-5c56-4a05-90a9-f2740388eb58" (UID: "f8a40075-5c56-4a05-90a9-f2740388eb58"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.521170 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8a40075-5c56-4a05-90a9-f2740388eb58-config" (OuterVolumeSpecName: "config") pod "f8a40075-5c56-4a05-90a9-f2740388eb58" (UID: "f8a40075-5c56-4a05-90a9-f2740388eb58"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.522371 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.525791 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a40075-5c56-4a05-90a9-f2740388eb58-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f8a40075-5c56-4a05-90a9-f2740388eb58" (UID: "f8a40075-5c56-4a05-90a9-f2740388eb58"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.525970 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8a40075-5c56-4a05-90a9-f2740388eb58-kube-api-access-zzblz" (OuterVolumeSpecName: "kube-api-access-zzblz") pod "f8a40075-5c56-4a05-90a9-f2740388eb58" (UID: "f8a40075-5c56-4a05-90a9-f2740388eb58"). InnerVolumeSpecName "kube-api-access-zzblz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.620428 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-client-ca\") pod \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.621384 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-config\") pod \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.621992 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4x8z\" (UniqueName: \"kubernetes.io/projected/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-kube-api-access-q4x8z\") pod \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.622100 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-serving-cert\") pod \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.622197 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-proxy-ca-bundles\") pod \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\" (UID: \"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4\") " Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.622499 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f8a40075-5c56-4a05-90a9-f2740388eb58-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.622847 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzblz\" (UniqueName: \"kubernetes.io/projected/f8a40075-5c56-4a05-90a9-f2740388eb58-kube-api-access-zzblz\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.623241 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8a40075-5c56-4a05-90a9-f2740388eb58-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.621327 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-client-ca" (OuterVolumeSpecName: "client-ca") pod "57c1e30a-fa85-4ab0-9e0c-d409f13f24d4" (UID: "57c1e30a-fa85-4ab0-9e0c-d409f13f24d4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.621920 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-config" (OuterVolumeSpecName: "config") pod "57c1e30a-fa85-4ab0-9e0c-d409f13f24d4" (UID: "57c1e30a-fa85-4ab0-9e0c-d409f13f24d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.623130 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "57c1e30a-fa85-4ab0-9e0c-d409f13f24d4" (UID: "57c1e30a-fa85-4ab0-9e0c-d409f13f24d4"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.623320 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8a40075-5c56-4a05-90a9-f2740388eb58-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.625170 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-kube-api-access-q4x8z" (OuterVolumeSpecName: "kube-api-access-q4x8z") pod "57c1e30a-fa85-4ab0-9e0c-d409f13f24d4" (UID: "57c1e30a-fa85-4ab0-9e0c-d409f13f24d4"). InnerVolumeSpecName "kube-api-access-q4x8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.625791 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "57c1e30a-fa85-4ab0-9e0c-d409f13f24d4" (UID: "57c1e30a-fa85-4ab0-9e0c-d409f13f24d4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.725003 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.725043 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.725055 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.725065 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4x8z\" (UniqueName: \"kubernetes.io/projected/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-kube-api-access-q4x8z\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:20 crc kubenswrapper[4812]: I0216 13:35:20.725077 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.044781 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gfhfv" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.081370 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gfhfv" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.126469 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8bb46d75d-dphhm"] Feb 16 13:35:21 crc kubenswrapper[4812]: E0216 
13:35:21.126751 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8a40075-5c56-4a05-90a9-f2740388eb58" containerName="route-controller-manager" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.126770 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8a40075-5c56-4a05-90a9-f2740388eb58" containerName="route-controller-manager" Feb 16 13:35:21 crc kubenswrapper[4812]: E0216 13:35:21.126787 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68168164-88dd-4c28-824f-e1702db05aea" containerName="extract-content" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.126795 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="68168164-88dd-4c28-824f-e1702db05aea" containerName="extract-content" Feb 16 13:35:21 crc kubenswrapper[4812]: E0216 13:35:21.126811 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c1e30a-fa85-4ab0-9e0c-d409f13f24d4" containerName="controller-manager" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.126820 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c1e30a-fa85-4ab0-9e0c-d409f13f24d4" containerName="controller-manager" Feb 16 13:35:21 crc kubenswrapper[4812]: E0216 13:35:21.126842 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68168164-88dd-4c28-824f-e1702db05aea" containerName="extract-utilities" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.126850 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="68168164-88dd-4c28-824f-e1702db05aea" containerName="extract-utilities" Feb 16 13:35:21 crc kubenswrapper[4812]: E0216 13:35:21.126866 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68168164-88dd-4c28-824f-e1702db05aea" containerName="registry-server" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.126875 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="68168164-88dd-4c28-824f-e1702db05aea" containerName="registry-server" Feb 16 13:35:21 crc 
kubenswrapper[4812]: I0216 13:35:21.127023 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="57c1e30a-fa85-4ab0-9e0c-d409f13f24d4" containerName="controller-manager" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.127044 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="68168164-88dd-4c28-824f-e1702db05aea" containerName="registry-server" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.127058 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8a40075-5c56-4a05-90a9-f2740388eb58" containerName="route-controller-manager" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.127537 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.144662 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8"] Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.145819 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.153633 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8bb46d75d-dphhm"] Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.158986 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8"] Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.221843 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t9zmh" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.230895 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctb2t\" (UniqueName: \"kubernetes.io/projected/898afeea-6bdb-425f-af8d-5397b1c0ce5f-kube-api-access-ctb2t\") pod \"route-controller-manager-85fb4db986-fbpt8\" (UID: \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\") " pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.230969 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0dea8c-4cf0-448f-8438-c32062604ce4-serving-cert\") pod \"controller-manager-8bb46d75d-dphhm\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.231002 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-config\") pod \"controller-manager-8bb46d75d-dphhm\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " 
pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.231066 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/898afeea-6bdb-425f-af8d-5397b1c0ce5f-config\") pod \"route-controller-manager-85fb4db986-fbpt8\" (UID: \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\") " pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.231099 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n445v\" (UniqueName: \"kubernetes.io/projected/9c0dea8c-4cf0-448f-8438-c32062604ce4-kube-api-access-n445v\") pod \"controller-manager-8bb46d75d-dphhm\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.231134 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/898afeea-6bdb-425f-af8d-5397b1c0ce5f-serving-cert\") pod \"route-controller-manager-85fb4db986-fbpt8\" (UID: \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\") " pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.231167 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/898afeea-6bdb-425f-af8d-5397b1c0ce5f-client-ca\") pod \"route-controller-manager-85fb4db986-fbpt8\" (UID: \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\") " pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.231195 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-proxy-ca-bundles\") pod \"controller-manager-8bb46d75d-dphhm\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.231223 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-client-ca\") pod \"controller-manager-8bb46d75d-dphhm\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.332123 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/898afeea-6bdb-425f-af8d-5397b1c0ce5f-serving-cert\") pod \"route-controller-manager-85fb4db986-fbpt8\" (UID: \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\") " pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.332173 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/898afeea-6bdb-425f-af8d-5397b1c0ce5f-client-ca\") pod \"route-controller-manager-85fb4db986-fbpt8\" (UID: \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\") " pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.332200 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-proxy-ca-bundles\") pod \"controller-manager-8bb46d75d-dphhm\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " 
pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.332220 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-client-ca\") pod \"controller-manager-8bb46d75d-dphhm\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.332250 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctb2t\" (UniqueName: \"kubernetes.io/projected/898afeea-6bdb-425f-af8d-5397b1c0ce5f-kube-api-access-ctb2t\") pod \"route-controller-manager-85fb4db986-fbpt8\" (UID: \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\") " pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.332294 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0dea8c-4cf0-448f-8438-c32062604ce4-serving-cert\") pod \"controller-manager-8bb46d75d-dphhm\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.332313 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-config\") pod \"controller-manager-8bb46d75d-dphhm\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.332339 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/898afeea-6bdb-425f-af8d-5397b1c0ce5f-config\") pod 
\"route-controller-manager-85fb4db986-fbpt8\" (UID: \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\") " pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.332357 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n445v\" (UniqueName: \"kubernetes.io/projected/9c0dea8c-4cf0-448f-8438-c32062604ce4-kube-api-access-n445v\") pod \"controller-manager-8bb46d75d-dphhm\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.333584 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-client-ca\") pod \"controller-manager-8bb46d75d-dphhm\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.333596 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/898afeea-6bdb-425f-af8d-5397b1c0ce5f-client-ca\") pod \"route-controller-manager-85fb4db986-fbpt8\" (UID: \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\") " pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.334220 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-config\") pod \"controller-manager-8bb46d75d-dphhm\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.334260 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/898afeea-6bdb-425f-af8d-5397b1c0ce5f-config\") pod \"route-controller-manager-85fb4db986-fbpt8\" (UID: \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\") " pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.334377 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-proxy-ca-bundles\") pod \"controller-manager-8bb46d75d-dphhm\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.338834 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/898afeea-6bdb-425f-af8d-5397b1c0ce5f-serving-cert\") pod \"route-controller-manager-85fb4db986-fbpt8\" (UID: \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\") " pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.340167 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0dea8c-4cf0-448f-8438-c32062604ce4-serving-cert\") pod \"controller-manager-8bb46d75d-dphhm\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.352726 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctb2t\" (UniqueName: \"kubernetes.io/projected/898afeea-6bdb-425f-af8d-5397b1c0ce5f-kube-api-access-ctb2t\") pod \"route-controller-manager-85fb4db986-fbpt8\" (UID: \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\") " pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 
13:35:21.352906 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n445v\" (UniqueName: \"kubernetes.io/projected/9c0dea8c-4cf0-448f-8438-c32062604ce4-kube-api-access-n445v\") pod \"controller-manager-8bb46d75d-dphhm\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.440681 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" event={"ID":"57c1e30a-fa85-4ab0-9e0c-d409f13f24d4","Type":"ContainerDied","Data":"5ab51404baf57057518a07b7f2d39756a2eea6a40871976cebb3cb8f669d8a56"} Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.440766 4812 scope.go:117] "RemoveContainer" containerID="700b33c0dce6243a18f1da5e9c721699c16582fef4ce16606cd27be683d39862" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.440886 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.441064 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.441342 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b5879bb6b-fhngf" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.465725 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.471954 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b5879bb6b-fhngf"] Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.481671 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b5879bb6b-fhngf"] Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.486881 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj"] Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.489972 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcfbd7fb6-8qqsj"] Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.490192 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.530235 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.898511 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57c1e30a-fa85-4ab0-9e0c-d409f13f24d4" path="/var/lib/kubelet/pods/57c1e30a-fa85-4ab0-9e0c-d409f13f24d4/volumes" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.899576 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8a40075-5c56-4a05-90a9-f2740388eb58" path="/var/lib/kubelet/pods/f8a40075-5c56-4a05-90a9-f2740388eb58/volumes" Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.900581 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8bb46d75d-dphhm"] Feb 16 13:35:21 crc 
kubenswrapper[4812]: W0216 13:35:21.906510 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c0dea8c_4cf0_448f_8438_c32062604ce4.slice/crio-17c30c47dd83299c2aeb6a1672c990ddb09de123dfce9e0e1eede86486620154 WatchSource:0}: Error finding container 17c30c47dd83299c2aeb6a1672c990ddb09de123dfce9e0e1eede86486620154: Status 404 returned error can't find the container with id 17c30c47dd83299c2aeb6a1672c990ddb09de123dfce9e0e1eede86486620154 Feb 16 13:35:21 crc kubenswrapper[4812]: I0216 13:35:21.952138 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8"] Feb 16 13:35:22 crc kubenswrapper[4812]: I0216 13:35:22.446674 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" event={"ID":"9c0dea8c-4cf0-448f-8438-c32062604ce4","Type":"ContainerStarted","Data":"17c30c47dd83299c2aeb6a1672c990ddb09de123dfce9e0e1eede86486620154"} Feb 16 13:35:22 crc kubenswrapper[4812]: I0216 13:35:22.447527 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" event={"ID":"898afeea-6bdb-425f-af8d-5397b1c0ce5f","Type":"ContainerStarted","Data":"f30bf6e794ee1178ca4e626a90acec250609b19685668a753a8076076b0b69cd"} Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.274868 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cqhcl" Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.280701 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cxh5t"] Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.454633 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" 
event={"ID":"9c0dea8c-4cf0-448f-8438-c32062604ce4","Type":"ContainerStarted","Data":"ee9ec170bfdbce5ab17b3fb453276f69f375dcd633a99510fabf079ae001f4f9"} Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.455994 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.456977 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" event={"ID":"898afeea-6bdb-425f-af8d-5397b1c0ce5f","Type":"ContainerStarted","Data":"5414541643aa67ad8bda2cf263021f1e3165e41765e9788093bec52dbf3f2dda"} Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.457171 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cxh5t" podUID="f4e6d69a-43ea-4b9b-a150-640b86bfbf42" containerName="registry-server" containerID="cri-o://08d35b6093da03d2b92182fa9bf61c1f2b525b664a60bc95ecdb179791eec7eb" gracePeriod=2 Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.457462 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.460116 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.465644 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.483289 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" podStartSLOduration=4.483274285 podStartE2EDuration="4.483274285s" 
podCreationTimestamp="2026-02-16 13:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:35:23.480489803 +0000 UTC m=+212.544820534" watchObservedRunningTime="2026-02-16 13:35:23.483274285 +0000 UTC m=+212.547604986" Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.524235 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" podStartSLOduration=3.524219043 podStartE2EDuration="3.524219043s" podCreationTimestamp="2026-02-16 13:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:35:23.523412399 +0000 UTC m=+212.587743120" watchObservedRunningTime="2026-02-16 13:35:23.524219043 +0000 UTC m=+212.588549744" Feb 16 13:35:23 crc kubenswrapper[4812]: E0216 13:35:23.582859 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4e6d69a_43ea_4b9b_a150_640b86bfbf42.slice/crio-conmon-08d35b6093da03d2b92182fa9bf61c1f2b525b664a60bc95ecdb179791eec7eb.scope\": RecentStats: unable to find data in memory cache]" Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.645751 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.898191 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.969758 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-catalog-content\") pod \"f4e6d69a-43ea-4b9b-a150-640b86bfbf42\" (UID: \"f4e6d69a-43ea-4b9b-a150-640b86bfbf42\") " Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.970111 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-utilities\") pod \"f4e6d69a-43ea-4b9b-a150-640b86bfbf42\" (UID: \"f4e6d69a-43ea-4b9b-a150-640b86bfbf42\") " Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.970334 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j5jt\" (UniqueName: \"kubernetes.io/projected/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-kube-api-access-5j5jt\") pod \"f4e6d69a-43ea-4b9b-a150-640b86bfbf42\" (UID: \"f4e6d69a-43ea-4b9b-a150-640b86bfbf42\") " Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.970819 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-utilities" (OuterVolumeSpecName: "utilities") pod "f4e6d69a-43ea-4b9b-a150-640b86bfbf42" (UID: "f4e6d69a-43ea-4b9b-a150-640b86bfbf42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:35:23 crc kubenswrapper[4812]: I0216 13:35:23.976028 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-kube-api-access-5j5jt" (OuterVolumeSpecName: "kube-api-access-5j5jt") pod "f4e6d69a-43ea-4b9b-a150-640b86bfbf42" (UID: "f4e6d69a-43ea-4b9b-a150-640b86bfbf42"). InnerVolumeSpecName "kube-api-access-5j5jt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.017852 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4e6d69a-43ea-4b9b-a150-640b86bfbf42" (UID: "f4e6d69a-43ea-4b9b-a150-640b86bfbf42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.071536 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.071578 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.071592 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j5jt\" (UniqueName: \"kubernetes.io/projected/f4e6d69a-43ea-4b9b-a150-640b86bfbf42-kube-api-access-5j5jt\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.270097 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fjz4f" Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.315678 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fjz4f" Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.466823 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4e6d69a-43ea-4b9b-a150-640b86bfbf42" containerID="08d35b6093da03d2b92182fa9bf61c1f2b525b664a60bc95ecdb179791eec7eb" exitCode=0 Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 
13:35:24.466885 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxh5t" Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.466928 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxh5t" event={"ID":"f4e6d69a-43ea-4b9b-a150-640b86bfbf42","Type":"ContainerDied","Data":"08d35b6093da03d2b92182fa9bf61c1f2b525b664a60bc95ecdb179791eec7eb"} Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.466979 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxh5t" event={"ID":"f4e6d69a-43ea-4b9b-a150-640b86bfbf42","Type":"ContainerDied","Data":"12cd9cf36ff667dd43ba7267f2e24d663c34d0b0f199ef4cde2a7a9b5e8ecdf6"} Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.467006 4812 scope.go:117] "RemoveContainer" containerID="08d35b6093da03d2b92182fa9bf61c1f2b525b664a60bc95ecdb179791eec7eb" Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.484268 4812 scope.go:117] "RemoveContainer" containerID="380b01cd27500959fda3b9ff3f3a14731ad906b2f357aba0103177e16e77bc16" Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.503218 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cxh5t"] Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.506903 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cxh5t"] Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.507056 4812 scope.go:117] "RemoveContainer" containerID="7dee5d7a7242e87467fc0dded2176f43137138a0fa79b07208d4c85bf8bfec68" Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.522481 4812 scope.go:117] "RemoveContainer" containerID="08d35b6093da03d2b92182fa9bf61c1f2b525b664a60bc95ecdb179791eec7eb" Feb 16 13:35:24 crc kubenswrapper[4812]: E0216 13:35:24.523036 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"08d35b6093da03d2b92182fa9bf61c1f2b525b664a60bc95ecdb179791eec7eb\": container with ID starting with 08d35b6093da03d2b92182fa9bf61c1f2b525b664a60bc95ecdb179791eec7eb not found: ID does not exist" containerID="08d35b6093da03d2b92182fa9bf61c1f2b525b664a60bc95ecdb179791eec7eb" Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.523096 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08d35b6093da03d2b92182fa9bf61c1f2b525b664a60bc95ecdb179791eec7eb"} err="failed to get container status \"08d35b6093da03d2b92182fa9bf61c1f2b525b664a60bc95ecdb179791eec7eb\": rpc error: code = NotFound desc = could not find container \"08d35b6093da03d2b92182fa9bf61c1f2b525b664a60bc95ecdb179791eec7eb\": container with ID starting with 08d35b6093da03d2b92182fa9bf61c1f2b525b664a60bc95ecdb179791eec7eb not found: ID does not exist" Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.523240 4812 scope.go:117] "RemoveContainer" containerID="380b01cd27500959fda3b9ff3f3a14731ad906b2f357aba0103177e16e77bc16" Feb 16 13:35:24 crc kubenswrapper[4812]: E0216 13:35:24.523737 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"380b01cd27500959fda3b9ff3f3a14731ad906b2f357aba0103177e16e77bc16\": container with ID starting with 380b01cd27500959fda3b9ff3f3a14731ad906b2f357aba0103177e16e77bc16 not found: ID does not exist" containerID="380b01cd27500959fda3b9ff3f3a14731ad906b2f357aba0103177e16e77bc16" Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.523806 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"380b01cd27500959fda3b9ff3f3a14731ad906b2f357aba0103177e16e77bc16"} err="failed to get container status \"380b01cd27500959fda3b9ff3f3a14731ad906b2f357aba0103177e16e77bc16\": rpc error: code = NotFound desc = could not find container 
\"380b01cd27500959fda3b9ff3f3a14731ad906b2f357aba0103177e16e77bc16\": container with ID starting with 380b01cd27500959fda3b9ff3f3a14731ad906b2f357aba0103177e16e77bc16 not found: ID does not exist" Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.523829 4812 scope.go:117] "RemoveContainer" containerID="7dee5d7a7242e87467fc0dded2176f43137138a0fa79b07208d4c85bf8bfec68" Feb 16 13:35:24 crc kubenswrapper[4812]: E0216 13:35:24.524379 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dee5d7a7242e87467fc0dded2176f43137138a0fa79b07208d4c85bf8bfec68\": container with ID starting with 7dee5d7a7242e87467fc0dded2176f43137138a0fa79b07208d4c85bf8bfec68 not found: ID does not exist" containerID="7dee5d7a7242e87467fc0dded2176f43137138a0fa79b07208d4c85bf8bfec68" Feb 16 13:35:24 crc kubenswrapper[4812]: I0216 13:35:24.524503 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dee5d7a7242e87467fc0dded2176f43137138a0fa79b07208d4c85bf8bfec68"} err="failed to get container status \"7dee5d7a7242e87467fc0dded2176f43137138a0fa79b07208d4c85bf8bfec68\": rpc error: code = NotFound desc = could not find container \"7dee5d7a7242e87467fc0dded2176f43137138a0fa79b07208d4c85bf8bfec68\": container with ID starting with 7dee5d7a7242e87467fc0dded2176f43137138a0fa79b07208d4c85bf8bfec68 not found: ID does not exist" Feb 16 13:35:25 crc kubenswrapper[4812]: I0216 13:35:25.679051 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbfqw"] Feb 16 13:35:25 crc kubenswrapper[4812]: I0216 13:35:25.679482 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lbfqw" podUID="1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" containerName="registry-server" containerID="cri-o://2a47df43b56b2bf43f35492ef36b24c2fc2d53e5430fc405d26bd8c7d811d475" gracePeriod=2 Feb 16 13:35:25 crc 
kubenswrapper[4812]: I0216 13:35:25.898346 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4e6d69a-43ea-4b9b-a150-640b86bfbf42" path="/var/lib/kubelet/pods/f4e6d69a-43ea-4b9b-a150-640b86bfbf42/volumes" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.210169 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.302474 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-utilities\") pod \"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce\" (UID: \"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce\") " Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.302555 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-catalog-content\") pod \"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce\" (UID: \"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce\") " Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.302671 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87fbb\" (UniqueName: \"kubernetes.io/projected/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-kube-api-access-87fbb\") pod \"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce\" (UID: \"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce\") " Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.303600 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-utilities" (OuterVolumeSpecName: "utilities") pod "1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" (UID: "1b3c17cd-2607-4379-9ff0-5ad26cfca6ce"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.308798 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-kube-api-access-87fbb" (OuterVolumeSpecName: "kube-api-access-87fbb") pod "1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" (UID: "1b3c17cd-2607-4379-9ff0-5ad26cfca6ce"). InnerVolumeSpecName "kube-api-access-87fbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.325014 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" (UID: "1b3c17cd-2607-4379-9ff0-5ad26cfca6ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.403865 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.403900 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.403912 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87fbb\" (UniqueName: \"kubernetes.io/projected/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce-kube-api-access-87fbb\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.482250 4812 generic.go:334] "Generic (PLEG): container finished" podID="1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" 
containerID="2a47df43b56b2bf43f35492ef36b24c2fc2d53e5430fc405d26bd8c7d811d475" exitCode=0 Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.482330 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbfqw" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.482309 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbfqw" event={"ID":"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce","Type":"ContainerDied","Data":"2a47df43b56b2bf43f35492ef36b24c2fc2d53e5430fc405d26bd8c7d811d475"} Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.482490 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbfqw" event={"ID":"1b3c17cd-2607-4379-9ff0-5ad26cfca6ce","Type":"ContainerDied","Data":"0855fd018bb4991cc65be52eb6ccf2b1ac9122095be85de7e28db9d6e46a1c0b"} Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.482507 4812 scope.go:117] "RemoveContainer" containerID="2a47df43b56b2bf43f35492ef36b24c2fc2d53e5430fc405d26bd8c7d811d475" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.498131 4812 scope.go:117] "RemoveContainer" containerID="1b9fd1189389c3723f97e0dc52f73f122778955e8a82c8c51ee1ac2df6466f5d" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.512248 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbfqw"] Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.514716 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbfqw"] Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.522729 4812 scope.go:117] "RemoveContainer" containerID="196128f078b5bd69c93040bb65ba1008aef7a641f66a262c292e0c88ccbcc77f" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.533641 4812 scope.go:117] "RemoveContainer" containerID="2a47df43b56b2bf43f35492ef36b24c2fc2d53e5430fc405d26bd8c7d811d475" Feb 16 
13:35:26 crc kubenswrapper[4812]: E0216 13:35:26.533887 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a47df43b56b2bf43f35492ef36b24c2fc2d53e5430fc405d26bd8c7d811d475\": container with ID starting with 2a47df43b56b2bf43f35492ef36b24c2fc2d53e5430fc405d26bd8c7d811d475 not found: ID does not exist" containerID="2a47df43b56b2bf43f35492ef36b24c2fc2d53e5430fc405d26bd8c7d811d475" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.533916 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a47df43b56b2bf43f35492ef36b24c2fc2d53e5430fc405d26bd8c7d811d475"} err="failed to get container status \"2a47df43b56b2bf43f35492ef36b24c2fc2d53e5430fc405d26bd8c7d811d475\": rpc error: code = NotFound desc = could not find container \"2a47df43b56b2bf43f35492ef36b24c2fc2d53e5430fc405d26bd8c7d811d475\": container with ID starting with 2a47df43b56b2bf43f35492ef36b24c2fc2d53e5430fc405d26bd8c7d811d475 not found: ID does not exist" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.533947 4812 scope.go:117] "RemoveContainer" containerID="1b9fd1189389c3723f97e0dc52f73f122778955e8a82c8c51ee1ac2df6466f5d" Feb 16 13:35:26 crc kubenswrapper[4812]: E0216 13:35:26.534283 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b9fd1189389c3723f97e0dc52f73f122778955e8a82c8c51ee1ac2df6466f5d\": container with ID starting with 1b9fd1189389c3723f97e0dc52f73f122778955e8a82c8c51ee1ac2df6466f5d not found: ID does not exist" containerID="1b9fd1189389c3723f97e0dc52f73f122778955e8a82c8c51ee1ac2df6466f5d" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.534325 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9fd1189389c3723f97e0dc52f73f122778955e8a82c8c51ee1ac2df6466f5d"} err="failed to get container status 
\"1b9fd1189389c3723f97e0dc52f73f122778955e8a82c8c51ee1ac2df6466f5d\": rpc error: code = NotFound desc = could not find container \"1b9fd1189389c3723f97e0dc52f73f122778955e8a82c8c51ee1ac2df6466f5d\": container with ID starting with 1b9fd1189389c3723f97e0dc52f73f122778955e8a82c8c51ee1ac2df6466f5d not found: ID does not exist" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.534357 4812 scope.go:117] "RemoveContainer" containerID="196128f078b5bd69c93040bb65ba1008aef7a641f66a262c292e0c88ccbcc77f" Feb 16 13:35:26 crc kubenswrapper[4812]: E0216 13:35:26.534789 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"196128f078b5bd69c93040bb65ba1008aef7a641f66a262c292e0c88ccbcc77f\": container with ID starting with 196128f078b5bd69c93040bb65ba1008aef7a641f66a262c292e0c88ccbcc77f not found: ID does not exist" containerID="196128f078b5bd69c93040bb65ba1008aef7a641f66a262c292e0c88ccbcc77f" Feb 16 13:35:26 crc kubenswrapper[4812]: I0216 13:35:26.534813 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"196128f078b5bd69c93040bb65ba1008aef7a641f66a262c292e0c88ccbcc77f"} err="failed to get container status \"196128f078b5bd69c93040bb65ba1008aef7a641f66a262c292e0c88ccbcc77f\": rpc error: code = NotFound desc = could not find container \"196128f078b5bd69c93040bb65ba1008aef7a641f66a262c292e0c88ccbcc77f\": container with ID starting with 196128f078b5bd69c93040bb65ba1008aef7a641f66a262c292e0c88ccbcc77f not found: ID does not exist" Feb 16 13:35:27 crc kubenswrapper[4812]: I0216 13:35:27.887069 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" path="/var/lib/kubelet/pods/1b3c17cd-2607-4379-9ff0-5ad26cfca6ce/volumes" Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.442467 4812 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" podUID="5245eea2-0039-4127-bd35-5d4ab5204b62" containerName="oauth-openshift" containerID="cri-o://bb8177475cd3dedf523712b692ebc348f761ec602559ca31f6fb974cc5887146" gracePeriod=15 Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.876229 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.943749 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-login\") pod \"5245eea2-0039-4127-bd35-5d4ab5204b62\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.943805 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-audit-policies\") pod \"5245eea2-0039-4127-bd35-5d4ab5204b62\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.943842 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzhd7\" (UniqueName: \"kubernetes.io/projected/5245eea2-0039-4127-bd35-5d4ab5204b62-kube-api-access-mzhd7\") pod \"5245eea2-0039-4127-bd35-5d4ab5204b62\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.943885 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-trusted-ca-bundle\") pod \"5245eea2-0039-4127-bd35-5d4ab5204b62\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " Feb 16 13:35:29 crc kubenswrapper[4812]: 
I0216 13:35:29.943915 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-idp-0-file-data\") pod \"5245eea2-0039-4127-bd35-5d4ab5204b62\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.943944 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-router-certs\") pod \"5245eea2-0039-4127-bd35-5d4ab5204b62\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.943981 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5245eea2-0039-4127-bd35-5d4ab5204b62-audit-dir\") pod \"5245eea2-0039-4127-bd35-5d4ab5204b62\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.944011 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-ocp-branding-template\") pod \"5245eea2-0039-4127-bd35-5d4ab5204b62\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.944054 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-cliconfig\") pod \"5245eea2-0039-4127-bd35-5d4ab5204b62\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.944077 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-provider-selection\") pod \"5245eea2-0039-4127-bd35-5d4ab5204b62\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.944117 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-session\") pod \"5245eea2-0039-4127-bd35-5d4ab5204b62\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.944160 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-serving-cert\") pod \"5245eea2-0039-4127-bd35-5d4ab5204b62\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.944153 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5245eea2-0039-4127-bd35-5d4ab5204b62-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5245eea2-0039-4127-bd35-5d4ab5204b62" (UID: "5245eea2-0039-4127-bd35-5d4ab5204b62"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.944216 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-error\") pod \"5245eea2-0039-4127-bd35-5d4ab5204b62\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.944248 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-service-ca\") pod \"5245eea2-0039-4127-bd35-5d4ab5204b62\" (UID: \"5245eea2-0039-4127-bd35-5d4ab5204b62\") " Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.944506 4812 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5245eea2-0039-4127-bd35-5d4ab5204b62-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.944823 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5245eea2-0039-4127-bd35-5d4ab5204b62" (UID: "5245eea2-0039-4127-bd35-5d4ab5204b62"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.945263 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5245eea2-0039-4127-bd35-5d4ab5204b62" (UID: "5245eea2-0039-4127-bd35-5d4ab5204b62"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.945512 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5245eea2-0039-4127-bd35-5d4ab5204b62" (UID: "5245eea2-0039-4127-bd35-5d4ab5204b62"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.946168 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5245eea2-0039-4127-bd35-5d4ab5204b62" (UID: "5245eea2-0039-4127-bd35-5d4ab5204b62"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.949598 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5245eea2-0039-4127-bd35-5d4ab5204b62" (UID: "5245eea2-0039-4127-bd35-5d4ab5204b62"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.949640 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5245eea2-0039-4127-bd35-5d4ab5204b62-kube-api-access-mzhd7" (OuterVolumeSpecName: "kube-api-access-mzhd7") pod "5245eea2-0039-4127-bd35-5d4ab5204b62" (UID: "5245eea2-0039-4127-bd35-5d4ab5204b62"). InnerVolumeSpecName "kube-api-access-mzhd7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.950681 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5245eea2-0039-4127-bd35-5d4ab5204b62" (UID: "5245eea2-0039-4127-bd35-5d4ab5204b62"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.951171 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5245eea2-0039-4127-bd35-5d4ab5204b62" (UID: "5245eea2-0039-4127-bd35-5d4ab5204b62"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.951609 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5245eea2-0039-4127-bd35-5d4ab5204b62" (UID: "5245eea2-0039-4127-bd35-5d4ab5204b62"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.951889 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5245eea2-0039-4127-bd35-5d4ab5204b62" (UID: "5245eea2-0039-4127-bd35-5d4ab5204b62"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.952410 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5245eea2-0039-4127-bd35-5d4ab5204b62" (UID: "5245eea2-0039-4127-bd35-5d4ab5204b62"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.957779 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5245eea2-0039-4127-bd35-5d4ab5204b62" (UID: "5245eea2-0039-4127-bd35-5d4ab5204b62"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:35:29 crc kubenswrapper[4812]: I0216 13:35:29.958144 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5245eea2-0039-4127-bd35-5d4ab5204b62" (UID: "5245eea2-0039-4127-bd35-5d4ab5204b62"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.045467 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.045520 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.045542 4812 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.045563 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.045582 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzhd7\" (UniqueName: \"kubernetes.io/projected/5245eea2-0039-4127-bd35-5d4ab5204b62-kube-api-access-mzhd7\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.045601 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.045620 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.045637 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.045656 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.045673 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.045691 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.045711 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.045729 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5245eea2-0039-4127-bd35-5d4ab5204b62-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:30 crc kubenswrapper[4812]: 
I0216 13:35:30.502589 4812 generic.go:334] "Generic (PLEG): container finished" podID="5245eea2-0039-4127-bd35-5d4ab5204b62" containerID="bb8177475cd3dedf523712b692ebc348f761ec602559ca31f6fb974cc5887146" exitCode=0 Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.502662 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.502650 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" event={"ID":"5245eea2-0039-4127-bd35-5d4ab5204b62","Type":"ContainerDied","Data":"bb8177475cd3dedf523712b692ebc348f761ec602559ca31f6fb974cc5887146"} Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.503390 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4mg2p" event={"ID":"5245eea2-0039-4127-bd35-5d4ab5204b62","Type":"ContainerDied","Data":"c3260a47512d7c85f003ed16b14f1f86afe94056c98c9c3ea0a8db4ffa5b52a5"} Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.503415 4812 scope.go:117] "RemoveContainer" containerID="bb8177475cd3dedf523712b692ebc348f761ec602559ca31f6fb974cc5887146" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.520010 4812 scope.go:117] "RemoveContainer" containerID="bb8177475cd3dedf523712b692ebc348f761ec602559ca31f6fb974cc5887146" Feb 16 13:35:30 crc kubenswrapper[4812]: E0216 13:35:30.520567 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb8177475cd3dedf523712b692ebc348f761ec602559ca31f6fb974cc5887146\": container with ID starting with bb8177475cd3dedf523712b692ebc348f761ec602559ca31f6fb974cc5887146 not found: ID does not exist" containerID="bb8177475cd3dedf523712b692ebc348f761ec602559ca31f6fb974cc5887146" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.520612 4812 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8177475cd3dedf523712b692ebc348f761ec602559ca31f6fb974cc5887146"} err="failed to get container status \"bb8177475cd3dedf523712b692ebc348f761ec602559ca31f6fb974cc5887146\": rpc error: code = NotFound desc = could not find container \"bb8177475cd3dedf523712b692ebc348f761ec602559ca31f6fb974cc5887146\": container with ID starting with bb8177475cd3dedf523712b692ebc348f761ec602559ca31f6fb974cc5887146 not found: ID does not exist" Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.531235 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4mg2p"] Feb 16 13:35:30 crc kubenswrapper[4812]: I0216 13:35:30.535997 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4mg2p"] Feb 16 13:35:31 crc kubenswrapper[4812]: I0216 13:35:31.896799 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5245eea2-0039-4127-bd35-5d4ab5204b62" path="/var/lib/kubelet/pods/5245eea2-0039-4127-bd35-5d4ab5204b62/volumes" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.643475 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2"] Feb 16 13:35:39 crc kubenswrapper[4812]: E0216 13:35:39.644279 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e6d69a-43ea-4b9b-a150-640b86bfbf42" containerName="extract-utilities" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.644291 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e6d69a-43ea-4b9b-a150-640b86bfbf42" containerName="extract-utilities" Feb 16 13:35:39 crc kubenswrapper[4812]: E0216 13:35:39.644302 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" containerName="extract-content" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.644308 4812 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" containerName="extract-content" Feb 16 13:35:39 crc kubenswrapper[4812]: E0216 13:35:39.644318 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" containerName="registry-server" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.644325 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" containerName="registry-server" Feb 16 13:35:39 crc kubenswrapper[4812]: E0216 13:35:39.644337 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e6d69a-43ea-4b9b-a150-640b86bfbf42" containerName="registry-server" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.644343 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e6d69a-43ea-4b9b-a150-640b86bfbf42" containerName="registry-server" Feb 16 13:35:39 crc kubenswrapper[4812]: E0216 13:35:39.644351 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" containerName="extract-utilities" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.644357 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" containerName="extract-utilities" Feb 16 13:35:39 crc kubenswrapper[4812]: E0216 13:35:39.644365 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e6d69a-43ea-4b9b-a150-640b86bfbf42" containerName="extract-content" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.644372 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e6d69a-43ea-4b9b-a150-640b86bfbf42" containerName="extract-content" Feb 16 13:35:39 crc kubenswrapper[4812]: E0216 13:35:39.644399 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5245eea2-0039-4127-bd35-5d4ab5204b62" containerName="oauth-openshift" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.644405 4812 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5245eea2-0039-4127-bd35-5d4ab5204b62" containerName="oauth-openshift" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.644515 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5245eea2-0039-4127-bd35-5d4ab5204b62" containerName="oauth-openshift" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.644528 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b3c17cd-2607-4379-9ff0-5ad26cfca6ce" containerName="registry-server" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.644535 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e6d69a-43ea-4b9b-a150-640b86bfbf42" containerName="registry-server" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.644939 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.648061 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.648253 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.648764 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.648958 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.649374 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.649597 4812 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.649904 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.650502 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.650814 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.652717 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.653603 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.655961 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.662343 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2"] Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.662706 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.668859 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.673344 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 13:35:39 crc 
kubenswrapper[4812]: I0216 13:35:39.819938 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.819997 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.820024 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4336de6c-2a59-469b-8a6d-c97c74e127b0-audit-policies\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.820075 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.820205 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.820558 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.820628 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.820792 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.820855 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4336de6c-2a59-469b-8a6d-c97c74e127b0-audit-dir\") pod 
\"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.820964 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-session\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.821015 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-user-template-login\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.821046 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.821152 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-user-template-error\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 
13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.821207 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg9nb\" (UniqueName: \"kubernetes.io/projected/4336de6c-2a59-469b-8a6d-c97c74e127b0-kube-api-access-tg9nb\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.922679 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.922754 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.922818 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.922855 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/4336de6c-2a59-469b-8a6d-c97c74e127b0-audit-dir\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.922881 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-session\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.922902 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-user-template-login\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.922927 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.922977 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-user-template-error\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 
13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.923011 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg9nb\" (UniqueName: \"kubernetes.io/projected/4336de6c-2a59-469b-8a6d-c97c74e127b0-kube-api-access-tg9nb\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.923046 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.923070 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.923089 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4336de6c-2a59-469b-8a6d-c97c74e127b0-audit-policies\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.923115 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.923137 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.924043 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4336de6c-2a59-469b-8a6d-c97c74e127b0-audit-policies\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.924227 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.924600 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc 
kubenswrapper[4812]: I0216 13:35:39.924661 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4336de6c-2a59-469b-8a6d-c97c74e127b0-audit-dir\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.924797 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.930301 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.931169 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.931538 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-serving-cert\") 
pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.931764 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-user-template-login\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.933058 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.936852 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-system-session\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.938037 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.938706 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4336de6c-2a59-469b-8a6d-c97c74e127b0-v4-0-config-user-template-error\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.948186 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg9nb\" (UniqueName: \"kubernetes.io/projected/4336de6c-2a59-469b-8a6d-c97c74e127b0-kube-api-access-tg9nb\") pod \"oauth-openshift-6c8567d5c5-sdnc2\" (UID: \"4336de6c-2a59-469b-8a6d-c97c74e127b0\") " pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.959659 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.982112 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8bb46d75d-dphhm"] Feb 16 13:35:39 crc kubenswrapper[4812]: I0216 13:35:39.982349 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" podUID="9c0dea8c-4cf0-448f-8438-c32062604ce4" containerName="controller-manager" containerID="cri-o://ee9ec170bfdbce5ab17b3fb453276f69f375dcd633a99510fabf079ae001f4f9" gracePeriod=30 Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.064829 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8"] Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.065434 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" 
podUID="898afeea-6bdb-425f-af8d-5397b1c0ce5f" containerName="route-controller-manager" containerID="cri-o://5414541643aa67ad8bda2cf263021f1e3165e41765e9788093bec52dbf3f2dda" gracePeriod=30 Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.468518 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2"] Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.574472 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.633498 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/898afeea-6bdb-425f-af8d-5397b1c0ce5f-config\") pod \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\" (UID: \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\") " Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.633591 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/898afeea-6bdb-425f-af8d-5397b1c0ce5f-client-ca\") pod \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\" (UID: \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\") " Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.633654 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/898afeea-6bdb-425f-af8d-5397b1c0ce5f-serving-cert\") pod \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\" (UID: \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\") " Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.633738 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctb2t\" (UniqueName: \"kubernetes.io/projected/898afeea-6bdb-425f-af8d-5397b1c0ce5f-kube-api-access-ctb2t\") pod \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\" (UID: \"898afeea-6bdb-425f-af8d-5397b1c0ce5f\") " 
Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.634543 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/898afeea-6bdb-425f-af8d-5397b1c0ce5f-client-ca" (OuterVolumeSpecName: "client-ca") pod "898afeea-6bdb-425f-af8d-5397b1c0ce5f" (UID: "898afeea-6bdb-425f-af8d-5397b1c0ce5f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.634580 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/898afeea-6bdb-425f-af8d-5397b1c0ce5f-config" (OuterVolumeSpecName: "config") pod "898afeea-6bdb-425f-af8d-5397b1c0ce5f" (UID: "898afeea-6bdb-425f-af8d-5397b1c0ce5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.645589 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/898afeea-6bdb-425f-af8d-5397b1c0ce5f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "898afeea-6bdb-425f-af8d-5397b1c0ce5f" (UID: "898afeea-6bdb-425f-af8d-5397b1c0ce5f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.645629 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/898afeea-6bdb-425f-af8d-5397b1c0ce5f-kube-api-access-ctb2t" (OuterVolumeSpecName: "kube-api-access-ctb2t") pod "898afeea-6bdb-425f-af8d-5397b1c0ce5f" (UID: "898afeea-6bdb-425f-af8d-5397b1c0ce5f"). InnerVolumeSpecName "kube-api-access-ctb2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.683629 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" event={"ID":"4336de6c-2a59-469b-8a6d-c97c74e127b0","Type":"ContainerStarted","Data":"d7ddab11e6ca7cfa2e0f03e05fad90fc6c0336a01dd4d968a4ae2dceda305c7c"} Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.685502 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.685634 4812 generic.go:334] "Generic (PLEG): container finished" podID="9c0dea8c-4cf0-448f-8438-c32062604ce4" containerID="ee9ec170bfdbce5ab17b3fb453276f69f375dcd633a99510fabf079ae001f4f9" exitCode=0 Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.685699 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" event={"ID":"9c0dea8c-4cf0-448f-8438-c32062604ce4","Type":"ContainerDied","Data":"ee9ec170bfdbce5ab17b3fb453276f69f375dcd633a99510fabf079ae001f4f9"} Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.685721 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" event={"ID":"9c0dea8c-4cf0-448f-8438-c32062604ce4","Type":"ContainerDied","Data":"17c30c47dd83299c2aeb6a1672c990ddb09de123dfce9e0e1eede86486620154"} Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.685742 4812 scope.go:117] "RemoveContainer" containerID="ee9ec170bfdbce5ab17b3fb453276f69f375dcd633a99510fabf079ae001f4f9" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.688465 4812 generic.go:334] "Generic (PLEG): container finished" podID="898afeea-6bdb-425f-af8d-5397b1c0ce5f" containerID="5414541643aa67ad8bda2cf263021f1e3165e41765e9788093bec52dbf3f2dda" exitCode=0 Feb 16 13:35:40 crc 
kubenswrapper[4812]: I0216 13:35:40.688516 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" event={"ID":"898afeea-6bdb-425f-af8d-5397b1c0ce5f","Type":"ContainerDied","Data":"5414541643aa67ad8bda2cf263021f1e3165e41765e9788093bec52dbf3f2dda"} Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.688534 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" event={"ID":"898afeea-6bdb-425f-af8d-5397b1c0ce5f","Type":"ContainerDied","Data":"f30bf6e794ee1178ca4e626a90acec250609b19685668a753a8076076b0b69cd"} Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.688587 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.704623 4812 scope.go:117] "RemoveContainer" containerID="ee9ec170bfdbce5ab17b3fb453276f69f375dcd633a99510fabf079ae001f4f9" Feb 16 13:35:40 crc kubenswrapper[4812]: E0216 13:35:40.705239 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee9ec170bfdbce5ab17b3fb453276f69f375dcd633a99510fabf079ae001f4f9\": container with ID starting with ee9ec170bfdbce5ab17b3fb453276f69f375dcd633a99510fabf079ae001f4f9 not found: ID does not exist" containerID="ee9ec170bfdbce5ab17b3fb453276f69f375dcd633a99510fabf079ae001f4f9" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.705277 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9ec170bfdbce5ab17b3fb453276f69f375dcd633a99510fabf079ae001f4f9"} err="failed to get container status \"ee9ec170bfdbce5ab17b3fb453276f69f375dcd633a99510fabf079ae001f4f9\": rpc error: code = NotFound desc = could not find container 
\"ee9ec170bfdbce5ab17b3fb453276f69f375dcd633a99510fabf079ae001f4f9\": container with ID starting with ee9ec170bfdbce5ab17b3fb453276f69f375dcd633a99510fabf079ae001f4f9 not found: ID does not exist" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.705320 4812 scope.go:117] "RemoveContainer" containerID="5414541643aa67ad8bda2cf263021f1e3165e41765e9788093bec52dbf3f2dda" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.734143 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8"] Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.734222 4812 scope.go:117] "RemoveContainer" containerID="5414541643aa67ad8bda2cf263021f1e3165e41765e9788093bec52dbf3f2dda" Feb 16 13:35:40 crc kubenswrapper[4812]: E0216 13:35:40.734751 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5414541643aa67ad8bda2cf263021f1e3165e41765e9788093bec52dbf3f2dda\": container with ID starting with 5414541643aa67ad8bda2cf263021f1e3165e41765e9788093bec52dbf3f2dda not found: ID does not exist" containerID="5414541643aa67ad8bda2cf263021f1e3165e41765e9788093bec52dbf3f2dda" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.734803 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5414541643aa67ad8bda2cf263021f1e3165e41765e9788093bec52dbf3f2dda"} err="failed to get container status \"5414541643aa67ad8bda2cf263021f1e3165e41765e9788093bec52dbf3f2dda\": rpc error: code = NotFound desc = could not find container \"5414541643aa67ad8bda2cf263021f1e3165e41765e9788093bec52dbf3f2dda\": container with ID starting with 5414541643aa67ad8bda2cf263021f1e3165e41765e9788093bec52dbf3f2dda not found: ID does not exist" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.735998 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9c0dea8c-4cf0-448f-8438-c32062604ce4" (UID: "9c0dea8c-4cf0-448f-8438-c32062604ce4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.736095 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-proxy-ca-bundles\") pod \"9c0dea8c-4cf0-448f-8438-c32062604ce4\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.736205 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-client-ca\") pod \"9c0dea8c-4cf0-448f-8438-c32062604ce4\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.736282 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-config\") pod \"9c0dea8c-4cf0-448f-8438-c32062604ce4\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.736837 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-client-ca" (OuterVolumeSpecName: "client-ca") pod "9c0dea8c-4cf0-448f-8438-c32062604ce4" (UID: "9c0dea8c-4cf0-448f-8438-c32062604ce4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.737185 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-config" (OuterVolumeSpecName: "config") pod "9c0dea8c-4cf0-448f-8438-c32062604ce4" (UID: "9c0dea8c-4cf0-448f-8438-c32062604ce4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.737529 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb4db986-fbpt8"] Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.737883 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n445v\" (UniqueName: \"kubernetes.io/projected/9c0dea8c-4cf0-448f-8438-c32062604ce4-kube-api-access-n445v\") pod \"9c0dea8c-4cf0-448f-8438-c32062604ce4\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.738035 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0dea8c-4cf0-448f-8438-c32062604ce4-serving-cert\") pod \"9c0dea8c-4cf0-448f-8438-c32062604ce4\" (UID: \"9c0dea8c-4cf0-448f-8438-c32062604ce4\") " Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.738598 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.738626 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.738639 4812 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0dea8c-4cf0-448f-8438-c32062604ce4-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.738652 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctb2t\" (UniqueName: \"kubernetes.io/projected/898afeea-6bdb-425f-af8d-5397b1c0ce5f-kube-api-access-ctb2t\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.738686 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/898afeea-6bdb-425f-af8d-5397b1c0ce5f-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.738777 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/898afeea-6bdb-425f-af8d-5397b1c0ce5f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.738797 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/898afeea-6bdb-425f-af8d-5397b1c0ce5f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.740814 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c0dea8c-4cf0-448f-8438-c32062604ce4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9c0dea8c-4cf0-448f-8438-c32062604ce4" (UID: "9c0dea8c-4cf0-448f-8438-c32062604ce4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.741160 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c0dea8c-4cf0-448f-8438-c32062604ce4-kube-api-access-n445v" (OuterVolumeSpecName: "kube-api-access-n445v") pod "9c0dea8c-4cf0-448f-8438-c32062604ce4" (UID: "9c0dea8c-4cf0-448f-8438-c32062604ce4"). InnerVolumeSpecName "kube-api-access-n445v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.839857 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n445v\" (UniqueName: \"kubernetes.io/projected/9c0dea8c-4cf0-448f-8438-c32062604ce4-kube-api-access-n445v\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:40 crc kubenswrapper[4812]: I0216 13:35:40.839889 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0dea8c-4cf0-448f-8438-c32062604ce4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.647327 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp"] Feb 16 13:35:41 crc kubenswrapper[4812]: E0216 13:35:41.647592 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c0dea8c-4cf0-448f-8438-c32062604ce4" containerName="controller-manager" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.647605 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c0dea8c-4cf0-448f-8438-c32062604ce4" containerName="controller-manager" Feb 16 13:35:41 crc kubenswrapper[4812]: E0216 13:35:41.647615 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="898afeea-6bdb-425f-af8d-5397b1c0ce5f" containerName="route-controller-manager" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.647620 4812 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="898afeea-6bdb-425f-af8d-5397b1c0ce5f" containerName="route-controller-manager" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.647706 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c0dea8c-4cf0-448f-8438-c32062604ce4" containerName="controller-manager" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.647718 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="898afeea-6bdb-425f-af8d-5397b1c0ce5f" containerName="route-controller-manager" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.648070 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.653336 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.653567 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.653903 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.654056 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.654113 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.654160 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.657136 4812 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-67bb748c78-2s4kz"] Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.665711 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.670590 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp"] Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.680163 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67bb748c78-2s4kz"] Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.694281 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8bb46d75d-dphhm" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.697274 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.697522 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" event={"ID":"4336de6c-2a59-469b-8a6d-c97c74e127b0","Type":"ContainerStarted","Data":"4a7c8eb8167073b105dc118773561a129a86b3bc21e4270cefc5367ea667a4b1"} Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.702385 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.728566 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6c8567d5c5-sdnc2" podStartSLOduration=37.728548331 podStartE2EDuration="37.728548331s" podCreationTimestamp="2026-02-16 13:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:35:41.727622674 +0000 UTC m=+230.791953385" watchObservedRunningTime="2026-02-16 13:35:41.728548331 +0000 UTC m=+230.792879032" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.749200 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a83734cf-f94c-4208-94cf-705c421b6c41-proxy-ca-bundles\") pod \"controller-manager-67bb748c78-2s4kz\" (UID: \"a83734cf-f94c-4208-94cf-705c421b6c41\") " pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.749270 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a83734cf-f94c-4208-94cf-705c421b6c41-client-ca\") pod \"controller-manager-67bb748c78-2s4kz\" (UID: \"a83734cf-f94c-4208-94cf-705c421b6c41\") " pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.749316 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b-serving-cert\") pod \"route-controller-manager-6ff4d75c58-4b4dp\" (UID: \"8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b\") " pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.749364 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b-config\") pod \"route-controller-manager-6ff4d75c58-4b4dp\" (UID: \"8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b\") " pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" Feb 16 
13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.749405 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b-client-ca\") pod \"route-controller-manager-6ff4d75c58-4b4dp\" (UID: \"8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b\") " pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.749426 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a83734cf-f94c-4208-94cf-705c421b6c41-serving-cert\") pod \"controller-manager-67bb748c78-2s4kz\" (UID: \"a83734cf-f94c-4208-94cf-705c421b6c41\") " pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.749488 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87vn6\" (UniqueName: \"kubernetes.io/projected/8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b-kube-api-access-87vn6\") pod \"route-controller-manager-6ff4d75c58-4b4dp\" (UID: \"8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b\") " pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.749551 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83734cf-f94c-4208-94cf-705c421b6c41-config\") pod \"controller-manager-67bb748c78-2s4kz\" (UID: \"a83734cf-f94c-4208-94cf-705c421b6c41\") " pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.749615 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggngv\" (UniqueName: 
\"kubernetes.io/projected/a83734cf-f94c-4208-94cf-705c421b6c41-kube-api-access-ggngv\") pod \"controller-manager-67bb748c78-2s4kz\" (UID: \"a83734cf-f94c-4208-94cf-705c421b6c41\") " pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.763961 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8bb46d75d-dphhm"] Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.772016 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8bb46d75d-dphhm"] Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.850640 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a83734cf-f94c-4208-94cf-705c421b6c41-proxy-ca-bundles\") pod \"controller-manager-67bb748c78-2s4kz\" (UID: \"a83734cf-f94c-4208-94cf-705c421b6c41\") " pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.850938 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a83734cf-f94c-4208-94cf-705c421b6c41-client-ca\") pod \"controller-manager-67bb748c78-2s4kz\" (UID: \"a83734cf-f94c-4208-94cf-705c421b6c41\") " pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.850968 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b-serving-cert\") pod \"route-controller-manager-6ff4d75c58-4b4dp\" (UID: \"8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b\") " pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.850989 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b-config\") pod \"route-controller-manager-6ff4d75c58-4b4dp\" (UID: \"8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b\") " pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.851019 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b-client-ca\") pod \"route-controller-manager-6ff4d75c58-4b4dp\" (UID: \"8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b\") " pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.851034 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a83734cf-f94c-4208-94cf-705c421b6c41-serving-cert\") pod \"controller-manager-67bb748c78-2s4kz\" (UID: \"a83734cf-f94c-4208-94cf-705c421b6c41\") " pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.851050 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87vn6\" (UniqueName: \"kubernetes.io/projected/8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b-kube-api-access-87vn6\") pod \"route-controller-manager-6ff4d75c58-4b4dp\" (UID: \"8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b\") " pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.851073 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83734cf-f94c-4208-94cf-705c421b6c41-config\") pod \"controller-manager-67bb748c78-2s4kz\" (UID: \"a83734cf-f94c-4208-94cf-705c421b6c41\") " 
pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.851096 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggngv\" (UniqueName: \"kubernetes.io/projected/a83734cf-f94c-4208-94cf-705c421b6c41-kube-api-access-ggngv\") pod \"controller-manager-67bb748c78-2s4kz\" (UID: \"a83734cf-f94c-4208-94cf-705c421b6c41\") " pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.852203 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b-client-ca\") pod \"route-controller-manager-6ff4d75c58-4b4dp\" (UID: \"8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b\") " pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.852223 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a83734cf-f94c-4208-94cf-705c421b6c41-client-ca\") pod \"controller-manager-67bb748c78-2s4kz\" (UID: \"a83734cf-f94c-4208-94cf-705c421b6c41\") " pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.852400 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b-config\") pod \"route-controller-manager-6ff4d75c58-4b4dp\" (UID: \"8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b\") " pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.853025 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/a83734cf-f94c-4208-94cf-705c421b6c41-proxy-ca-bundles\") pod \"controller-manager-67bb748c78-2s4kz\" (UID: \"a83734cf-f94c-4208-94cf-705c421b6c41\") " pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.853058 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83734cf-f94c-4208-94cf-705c421b6c41-config\") pod \"controller-manager-67bb748c78-2s4kz\" (UID: \"a83734cf-f94c-4208-94cf-705c421b6c41\") " pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.854933 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b-serving-cert\") pod \"route-controller-manager-6ff4d75c58-4b4dp\" (UID: \"8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b\") " pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.866568 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a83734cf-f94c-4208-94cf-705c421b6c41-serving-cert\") pod \"controller-manager-67bb748c78-2s4kz\" (UID: \"a83734cf-f94c-4208-94cf-705c421b6c41\") " pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.868848 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87vn6\" (UniqueName: \"kubernetes.io/projected/8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b-kube-api-access-87vn6\") pod \"route-controller-manager-6ff4d75c58-4b4dp\" (UID: \"8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b\") " pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.896644 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggngv\" (UniqueName: \"kubernetes.io/projected/a83734cf-f94c-4208-94cf-705c421b6c41-kube-api-access-ggngv\") pod \"controller-manager-67bb748c78-2s4kz\" (UID: \"a83734cf-f94c-4208-94cf-705c421b6c41\") " pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.899237 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="898afeea-6bdb-425f-af8d-5397b1c0ce5f" path="/var/lib/kubelet/pods/898afeea-6bdb-425f-af8d-5397b1c0ce5f/volumes" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.900139 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c0dea8c-4cf0-448f-8438-c32062604ce4" path="/var/lib/kubelet/pods/9c0dea8c-4cf0-448f-8438-c32062604ce4/volumes" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.967177 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" Feb 16 13:35:41 crc kubenswrapper[4812]: I0216 13:35:41.992371 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:42 crc kubenswrapper[4812]: I0216 13:35:42.399574 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67bb748c78-2s4kz"] Feb 16 13:35:42 crc kubenswrapper[4812]: I0216 13:35:42.448694 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp"] Feb 16 13:35:42 crc kubenswrapper[4812]: I0216 13:35:42.702539 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" event={"ID":"8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b","Type":"ContainerStarted","Data":"c3e64e85b9ef61ec852b96cd4eb4101a0fce016f88f17bb7c16a295c00b4017b"} Feb 16 13:35:42 crc kubenswrapper[4812]: I0216 13:35:42.702593 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" event={"ID":"8e6fea7e-ca0e-4e9a-a15a-2dd1dff9a39b","Type":"ContainerStarted","Data":"e486b93b40a736d0f5f169fbe849bb344bc1aa56a75f691a4aee6c1457c7355c"} Feb 16 13:35:42 crc kubenswrapper[4812]: I0216 13:35:42.702737 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" Feb 16 13:35:42 crc kubenswrapper[4812]: I0216 13:35:42.703697 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" event={"ID":"a83734cf-f94c-4208-94cf-705c421b6c41","Type":"ContainerStarted","Data":"15011155e9e9d9613c70bca57917e5c267da92288c631cd093618df6c975b0ed"} Feb 16 13:35:42 crc kubenswrapper[4812]: I0216 13:35:42.703722 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" 
event={"ID":"a83734cf-f94c-4208-94cf-705c421b6c41","Type":"ContainerStarted","Data":"e04634f8dc6266ec933b657d995f9b0e52573824f902a7eac2037fd9e6409ecc"} Feb 16 13:35:42 crc kubenswrapper[4812]: I0216 13:35:42.721923 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" podStartSLOduration=2.721904971 podStartE2EDuration="2.721904971s" podCreationTimestamp="2026-02-16 13:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:35:42.717683315 +0000 UTC m=+231.782014016" watchObservedRunningTime="2026-02-16 13:35:42.721904971 +0000 UTC m=+231.786235682" Feb 16 13:35:42 crc kubenswrapper[4812]: I0216 13:35:42.739261 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" podStartSLOduration=2.7392435170000002 podStartE2EDuration="2.739243517s" podCreationTimestamp="2026-02-16 13:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:35:42.738582967 +0000 UTC m=+231.802913668" watchObservedRunningTime="2026-02-16 13:35:42.739243517 +0000 UTC m=+231.803574218" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.066706 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6ff4d75c58-4b4dp" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.708867 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.711634 4812 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 13:35:43 crc 
kubenswrapper[4812]: I0216 13:35:43.712556 4812 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.712717 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.712838 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca" gracePeriod=15 Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.712877 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6" gracePeriod=15 Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.712956 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035" gracePeriod=15 Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.712977 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea" gracePeriod=15 Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.713098 4812 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138" gracePeriod=15 Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.715425 4812 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 13:35:43 crc kubenswrapper[4812]: E0216 13:35:43.715618 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.715634 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 13:35:43 crc kubenswrapper[4812]: E0216 13:35:43.715647 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.715655 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 13:35:43 crc kubenswrapper[4812]: E0216 13:35:43.715666 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.715673 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 13:35:43 crc kubenswrapper[4812]: E0216 13:35:43.715688 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.715695 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="setup" Feb 16 13:35:43 crc kubenswrapper[4812]: E0216 13:35:43.715707 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.715715 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 13:35:43 crc kubenswrapper[4812]: E0216 13:35:43.715727 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.715735 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 13:35:43 crc kubenswrapper[4812]: E0216 13:35:43.715745 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.715751 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.715857 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.715869 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.715881 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.715892 
4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.715902 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.715910 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 13:35:43 crc kubenswrapper[4812]: E0216 13:35:43.716024 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.716034 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.716148 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.724303 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67bb748c78-2s4kz" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.731340 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.769266 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.872975 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.873023 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.873076 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.873255 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.873297 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.873347 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.873385 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.873455 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: E0216 13:35:43.912109 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035.scope\": RecentStats: unable to find data in memory cache]" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.975026 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.975083 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.975103 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.975121 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.975145 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.975170 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.975221 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.975255 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.975319 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.975355 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.975376 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.975396 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.975416 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.975435 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.975469 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:43 crc kubenswrapper[4812]: I0216 13:35:43.975557 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:44 crc kubenswrapper[4812]: I0216 13:35:44.067632 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:35:44 crc kubenswrapper[4812]: E0216 13:35:44.095177 4812 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.252:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894bd87b986d4f3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 13:35:44.094303475 +0000 UTC m=+233.158634176,LastTimestamp:2026-02-16 13:35:44.094303475 +0000 UTC m=+233.158634176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 13:35:44 crc kubenswrapper[4812]: I0216 13:35:44.716413 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"66d8cb48d213bcf59ce26660afb8b507b8f3ad2d1329acf007b1d8f94715f0b9"} Feb 16 13:35:44 crc kubenswrapper[4812]: I0216 13:35:44.716809 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1180ed608577a7ee433f4a16d41d30ff061cf58780644cce3ff2aee0073d14be"} Feb 16 13:35:44 crc 
kubenswrapper[4812]: I0216 13:35:44.718547 4812 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:44 crc kubenswrapper[4812]: I0216 13:35:44.720782 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 13:35:44 crc kubenswrapper[4812]: I0216 13:35:44.722181 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 13:35:44 crc kubenswrapper[4812]: I0216 13:35:44.722821 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6" exitCode=0 Feb 16 13:35:44 crc kubenswrapper[4812]: I0216 13:35:44.722850 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea" exitCode=0 Feb 16 13:35:44 crc kubenswrapper[4812]: I0216 13:35:44.722861 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138" exitCode=0 Feb 16 13:35:44 crc kubenswrapper[4812]: I0216 13:35:44.722864 4812 scope.go:117] "RemoveContainer" containerID="4e6028a4f8a5a893a7e53baf1480eac54c71e618a8d681b7874c00f3a4f2d0be" Feb 16 13:35:44 crc kubenswrapper[4812]: I0216 13:35:44.722872 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035" exitCode=2 Feb 16 13:35:44 crc kubenswrapper[4812]: I0216 13:35:44.726366 4812 generic.go:334] "Generic (PLEG): container finished" podID="e4158c95-a923-4240-a8bc-f9c44270275e" containerID="056a53ea3f3ece5be2b8c240f485b975b2ff0a4875f941bf118b4388f6ccbbbe" exitCode=0 Feb 16 13:35:44 crc kubenswrapper[4812]: I0216 13:35:44.726509 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e4158c95-a923-4240-a8bc-f9c44270275e","Type":"ContainerDied","Data":"056a53ea3f3ece5be2b8c240f485b975b2ff0a4875f941bf118b4388f6ccbbbe"} Feb 16 13:35:44 crc kubenswrapper[4812]: I0216 13:35:44.727224 4812 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:44 crc kubenswrapper[4812]: I0216 13:35:44.727548 4812 status_manager.go:851] "Failed to get status for pod" podUID="e4158c95-a923-4240-a8bc-f9c44270275e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:45 crc kubenswrapper[4812]: I0216 13:35:45.741194 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.100150 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.102875 4812 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.103480 4812 status_manager.go:851] "Failed to get status for pod" podUID="e4158c95-a923-4240-a8bc-f9c44270275e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.200549 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.201706 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.202604 4812 status_manager.go:851] "Failed to get status for pod" podUID="e4158c95-a923-4240-a8bc-f9c44270275e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.203310 4812 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.204241 4812 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.211179 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4158c95-a923-4240-a8bc-f9c44270275e-kube-api-access\") pod \"e4158c95-a923-4240-a8bc-f9c44270275e\" (UID: \"e4158c95-a923-4240-a8bc-f9c44270275e\") " Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.211569 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e4158c95-a923-4240-a8bc-f9c44270275e-var-lock\") pod \"e4158c95-a923-4240-a8bc-f9c44270275e\" (UID: \"e4158c95-a923-4240-a8bc-f9c44270275e\") " Feb 16 13:35:46 crc 
kubenswrapper[4812]: I0216 13:35:46.211656 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4158c95-a923-4240-a8bc-f9c44270275e-kubelet-dir\") pod \"e4158c95-a923-4240-a8bc-f9c44270275e\" (UID: \"e4158c95-a923-4240-a8bc-f9c44270275e\") " Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.211920 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4158c95-a923-4240-a8bc-f9c44270275e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e4158c95-a923-4240-a8bc-f9c44270275e" (UID: "e4158c95-a923-4240-a8bc-f9c44270275e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.211935 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4158c95-a923-4240-a8bc-f9c44270275e-var-lock" (OuterVolumeSpecName: "var-lock") pod "e4158c95-a923-4240-a8bc-f9c44270275e" (UID: "e4158c95-a923-4240-a8bc-f9c44270275e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.217643 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4158c95-a923-4240-a8bc-f9c44270275e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e4158c95-a923-4240-a8bc-f9c44270275e" (UID: "e4158c95-a923-4240-a8bc-f9c44270275e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.313124 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.313234 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.313479 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.313579 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.313635 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.313800 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.314193 4812 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e4158c95-a923-4240-a8bc-f9c44270275e-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.314242 4812 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4158c95-a923-4240-a8bc-f9c44270275e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.314272 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4158c95-a923-4240-a8bc-f9c44270275e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.314301 4812 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.314326 4812 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.314355 4812 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node 
\"crc\" DevicePath \"\"" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.752032 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.753109 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca" exitCode=0 Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.753193 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.753194 4812 scope.go:117] "RemoveContainer" containerID="9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.756181 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e4158c95-a923-4240-a8bc-f9c44270275e","Type":"ContainerDied","Data":"a312e14b6eac708160181cb07b0ae5eb045f3a0deccaf3d61f848e277857b7ba"} Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.756223 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a312e14b6eac708160181cb07b0ae5eb045f3a0deccaf3d61f848e277857b7ba" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.756303 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.769214 4812 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.769954 4812 status_manager.go:851] "Failed to get status for pod" podUID="e4158c95-a923-4240-a8bc-f9c44270275e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.770513 4812 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.775703 4812 status_manager.go:851] "Failed to get status for pod" podUID="e4158c95-a923-4240-a8bc-f9c44270275e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.776319 4812 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.129.56.252:6443: connect: connection refused" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.776572 4812 scope.go:117] "RemoveContainer" containerID="3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.776975 4812 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.795933 4812 scope.go:117] "RemoveContainer" containerID="1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.815402 4812 scope.go:117] "RemoveContainer" containerID="2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.840000 4812 scope.go:117] "RemoveContainer" containerID="b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.856310 4812 scope.go:117] "RemoveContainer" containerID="c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.878229 4812 scope.go:117] "RemoveContainer" containerID="9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6" Feb 16 13:35:46 crc kubenswrapper[4812]: E0216 13:35:46.879110 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\": container with ID starting with 9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6 not found: ID does not exist" 
containerID="9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.879147 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6"} err="failed to get container status \"9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\": rpc error: code = NotFound desc = could not find container \"9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6\": container with ID starting with 9f92165c7178abd9e054861a9eb371353d050b922ba46c2a29027462832abff6 not found: ID does not exist" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.879173 4812 scope.go:117] "RemoveContainer" containerID="3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea" Feb 16 13:35:46 crc kubenswrapper[4812]: E0216 13:35:46.879419 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\": container with ID starting with 3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea not found: ID does not exist" containerID="3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.879465 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea"} err="failed to get container status \"3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\": rpc error: code = NotFound desc = could not find container \"3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea\": container with ID starting with 3eb03abe122caff5d5c567507b7f760f27726497197b3244f9e3efa2f54fdfea not found: ID does not exist" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.879489 4812 scope.go:117] 
"RemoveContainer" containerID="1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138" Feb 16 13:35:46 crc kubenswrapper[4812]: E0216 13:35:46.880056 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\": container with ID starting with 1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138 not found: ID does not exist" containerID="1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.880077 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138"} err="failed to get container status \"1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\": rpc error: code = NotFound desc = could not find container \"1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138\": container with ID starting with 1e25b3d736303d64fcee90c1d9474cd1d738a9ed8ea7b9193cfb7598279ec138 not found: ID does not exist" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.880089 4812 scope.go:117] "RemoveContainer" containerID="2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035" Feb 16 13:35:46 crc kubenswrapper[4812]: E0216 13:35:46.880654 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\": container with ID starting with 2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035 not found: ID does not exist" containerID="2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.880676 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035"} err="failed to get container status \"2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\": rpc error: code = NotFound desc = could not find container \"2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035\": container with ID starting with 2a925bae9b25f1fca6ef0797ad523a77b70878ae2f9e7a5e7860f6f63585a035 not found: ID does not exist" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.880690 4812 scope.go:117] "RemoveContainer" containerID="b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca" Feb 16 13:35:46 crc kubenswrapper[4812]: E0216 13:35:46.881270 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\": container with ID starting with b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca not found: ID does not exist" containerID="b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.881307 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca"} err="failed to get container status \"b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\": rpc error: code = NotFound desc = could not find container \"b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca\": container with ID starting with b73c61f4b1fa4f0c4f233d424ee7ccda2a6c28fd17a101a30819be8537bf78ca not found: ID does not exist" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.881349 4812 scope.go:117] "RemoveContainer" containerID="c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea" Feb 16 13:35:46 crc kubenswrapper[4812]: E0216 13:35:46.881961 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\": container with ID starting with c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea not found: ID does not exist" containerID="c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea" Feb 16 13:35:46 crc kubenswrapper[4812]: I0216 13:35:46.881990 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea"} err="failed to get container status \"c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\": rpc error: code = NotFound desc = could not find container \"c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea\": container with ID starting with c4849604df49bbed5fa784b8c8b0692e6a2bae9748fdc160425e846e628df8ea not found: ID does not exist" Feb 16 13:35:47 crc kubenswrapper[4812]: I0216 13:35:47.886839 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 16 13:35:49 crc kubenswrapper[4812]: E0216 13:35:49.231060 4812 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.252:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894bd87b986d4f3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 13:35:44.094303475 +0000 UTC m=+233.158634176,LastTimestamp:2026-02-16 13:35:44.094303475 +0000 UTC m=+233.158634176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 13:35:51 crc kubenswrapper[4812]: I0216 13:35:51.882701 4812 status_manager.go:851] "Failed to get status for pod" podUID="e4158c95-a923-4240-a8bc-f9c44270275e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:51 crc kubenswrapper[4812]: I0216 13:35:51.883247 4812 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:52 crc kubenswrapper[4812]: E0216 13:35:52.231102 4812 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:52 crc kubenswrapper[4812]: E0216 13:35:52.231733 4812 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:52 crc kubenswrapper[4812]: E0216 13:35:52.232085 4812 controller.go:195] "Failed to update 
lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:52 crc kubenswrapper[4812]: E0216 13:35:52.232410 4812 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:52 crc kubenswrapper[4812]: E0216 13:35:52.232848 4812 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:52 crc kubenswrapper[4812]: I0216 13:35:52.232924 4812 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 13:35:52 crc kubenswrapper[4812]: E0216 13:35:52.233506 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.252:6443: connect: connection refused" interval="200ms" Feb 16 13:35:52 crc kubenswrapper[4812]: E0216 13:35:52.434811 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.252:6443: connect: connection refused" interval="400ms" Feb 16 13:35:52 crc kubenswrapper[4812]: E0216 13:35:52.835670 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.252:6443: connect: connection refused" interval="800ms" Feb 16 13:35:53 crc 
kubenswrapper[4812]: E0216 13:35:53.636463 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.252:6443: connect: connection refused" interval="1.6s" Feb 16 13:35:54 crc kubenswrapper[4812]: I0216 13:35:54.901031 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:54 crc kubenswrapper[4812]: I0216 13:35:54.904024 4812 status_manager.go:851] "Failed to get status for pod" podUID="e4158c95-a923-4240-a8bc-f9c44270275e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:54 crc kubenswrapper[4812]: I0216 13:35:54.904518 4812 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:54 crc kubenswrapper[4812]: I0216 13:35:54.918516 4812 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a4b29081-a34f-4671-85a3-e1bc2b16d37f" Feb 16 13:35:54 crc kubenswrapper[4812]: I0216 13:35:54.918554 4812 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a4b29081-a34f-4671-85a3-e1bc2b16d37f" Feb 16 13:35:54 crc kubenswrapper[4812]: E0216 13:35:54.919094 4812 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.252:6443: connect: 
connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:54 crc kubenswrapper[4812]: I0216 13:35:54.919842 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:55 crc kubenswrapper[4812]: E0216 13:35:55.237369 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.252:6443: connect: connection refused" interval="3.2s" Feb 16 13:35:55 crc kubenswrapper[4812]: I0216 13:35:55.863894 4812 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="9a01559b4c7bd65f07cfeebdad5731f19a4287cb0e48c2bca9cb463e9bab89ea" exitCode=0 Feb 16 13:35:55 crc kubenswrapper[4812]: I0216 13:35:55.863966 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"9a01559b4c7bd65f07cfeebdad5731f19a4287cb0e48c2bca9cb463e9bab89ea"} Feb 16 13:35:55 crc kubenswrapper[4812]: I0216 13:35:55.864009 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"424b23ee2f5abfb0508731796fa98dc37067d3aa5cf7daeaee955a736d4eaf5c"} Feb 16 13:35:55 crc kubenswrapper[4812]: I0216 13:35:55.864507 4812 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a4b29081-a34f-4671-85a3-e1bc2b16d37f" Feb 16 13:35:55 crc kubenswrapper[4812]: I0216 13:35:55.864550 4812 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a4b29081-a34f-4671-85a3-e1bc2b16d37f" Feb 16 13:35:55 crc kubenswrapper[4812]: E0216 13:35:55.865332 4812 mirror_client.go:138] "Failed deleting a mirror 
pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:55 crc kubenswrapper[4812]: I0216 13:35:55.865356 4812 status_manager.go:851] "Failed to get status for pod" podUID="e4158c95-a923-4240-a8bc-f9c44270275e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:55 crc kubenswrapper[4812]: I0216 13:35:55.865982 4812 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.252:6443: connect: connection refused" Feb 16 13:35:56 crc kubenswrapper[4812]: I0216 13:35:56.871160 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1c941b2e17d6c6a9116712fe991772ee9f9ae4f1137628a168dec6187fb5e003"} Feb 16 13:35:56 crc kubenswrapper[4812]: I0216 13:35:56.871483 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"21963140af587cf031ce5911b0f49bebf1411743e8b7b4965d9c975e07bbd004"} Feb 16 13:35:56 crc kubenswrapper[4812]: I0216 13:35:56.871498 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"30eac98d8d785a0c4d627f68ad721c0a4fc4d362c725adcbaead0348439cabf7"} Feb 16 13:35:56 crc 
kubenswrapper[4812]: I0216 13:35:56.871509 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7a87b132e4aa61287eec03a2466b74130e682bd0677ac7ff10d198e3526c1bb4"} Feb 16 13:35:57 crc kubenswrapper[4812]: I0216 13:35:57.889368 4812 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a4b29081-a34f-4671-85a3-e1bc2b16d37f" Feb 16 13:35:57 crc kubenswrapper[4812]: I0216 13:35:57.889430 4812 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a4b29081-a34f-4671-85a3-e1bc2b16d37f" Feb 16 13:35:57 crc kubenswrapper[4812]: I0216 13:35:57.890697 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:57 crc kubenswrapper[4812]: I0216 13:35:57.890743 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c951f24b5a55db5aac5bea62b17196d1d4e68e414e42fb1f5c68cee438299148"} Feb 16 13:35:58 crc kubenswrapper[4812]: I0216 13:35:58.898573 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 13:35:58 crc kubenswrapper[4812]: I0216 13:35:58.898889 4812 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175" exitCode=1 Feb 16 13:35:58 crc kubenswrapper[4812]: I0216 13:35:58.898926 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175"} Feb 16 13:35:58 crc kubenswrapper[4812]: I0216 13:35:58.899431 4812 scope.go:117] "RemoveContainer" containerID="f5304f578aa4297d0e9e57669cb571d178fada6972553487f8f3e6a6a2422175" Feb 16 13:35:59 crc kubenswrapper[4812]: I0216 13:35:59.001652 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:35:59 crc kubenswrapper[4812]: I0216 13:35:59.919957 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:59 crc kubenswrapper[4812]: I0216 13:35:59.920009 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:35:59 crc kubenswrapper[4812]: I0216 13:35:59.920741 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 13:35:59 crc kubenswrapper[4812]: I0216 13:35:59.920804 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"17eb6ee219e607497a876f7ab0cdc30a571866173c40156557556cd081fa67e8"} Feb 16 13:35:59 crc kubenswrapper[4812]: I0216 13:35:59.926028 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:36:02 crc kubenswrapper[4812]: I0216 13:36:02.949407 4812 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:36:03 crc kubenswrapper[4812]: I0216 13:36:03.017237 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6f14add8-2250-426a-9bba-07d9dac7d56e" Feb 16 13:36:03 crc kubenswrapper[4812]: I0216 13:36:03.940533 4812 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a4b29081-a34f-4671-85a3-e1bc2b16d37f" Feb 16 13:36:03 crc kubenswrapper[4812]: I0216 13:36:03.940876 4812 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a4b29081-a34f-4671-85a3-e1bc2b16d37f" Feb 16 13:36:03 crc kubenswrapper[4812]: I0216 13:36:03.943401 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6f14add8-2250-426a-9bba-07d9dac7d56e" Feb 16 13:36:03 crc kubenswrapper[4812]: I0216 13:36:03.943935 4812 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://7a87b132e4aa61287eec03a2466b74130e682bd0677ac7ff10d198e3526c1bb4" Feb 16 13:36:03 crc kubenswrapper[4812]: I0216 13:36:03.943960 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:36:04 crc kubenswrapper[4812]: I0216 13:36:04.224796 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:36:04 crc kubenswrapper[4812]: I0216 13:36:04.228927 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:36:04 crc kubenswrapper[4812]: I0216 13:36:04.945901 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:36:04 crc kubenswrapper[4812]: I0216 13:36:04.945958 4812 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a4b29081-a34f-4671-85a3-e1bc2b16d37f" Feb 16 13:36:04 crc kubenswrapper[4812]: I0216 13:36:04.945987 4812 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a4b29081-a34f-4671-85a3-e1bc2b16d37f" Feb 16 13:36:04 crc kubenswrapper[4812]: I0216 13:36:04.951229 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6f14add8-2250-426a-9bba-07d9dac7d56e" Feb 16 13:36:09 crc kubenswrapper[4812]: I0216 13:36:09.008602 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 13:36:10 crc kubenswrapper[4812]: I0216 13:36:10.943179 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 13:36:11 crc kubenswrapper[4812]: I0216 13:36:11.060095 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 13:36:11 crc kubenswrapper[4812]: I0216 13:36:11.132547 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 13:36:11 crc kubenswrapper[4812]: I0216 13:36:11.712243 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 13:36:12 crc kubenswrapper[4812]: I0216 13:36:12.792395 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 13:36:12 crc kubenswrapper[4812]: I0216 13:36:12.953260 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 13:36:14 crc 
kubenswrapper[4812]: I0216 13:36:14.917747 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 13:36:14 crc kubenswrapper[4812]: I0216 13:36:14.998192 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 13:36:15 crc kubenswrapper[4812]: I0216 13:36:15.013757 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 13:36:15 crc kubenswrapper[4812]: I0216 13:36:15.086662 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 13:36:15 crc kubenswrapper[4812]: I0216 13:36:15.108663 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 13:36:15 crc kubenswrapper[4812]: I0216 13:36:15.150511 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 13:36:15 crc kubenswrapper[4812]: I0216 13:36:15.271406 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 13:36:15 crc kubenswrapper[4812]: I0216 13:36:15.349112 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 13:36:15 crc kubenswrapper[4812]: I0216 13:36:15.668049 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 13:36:16 crc kubenswrapper[4812]: I0216 13:36:16.277396 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 13:36:16 crc kubenswrapper[4812]: I0216 13:36:16.353816 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 13:36:16 crc 
kubenswrapper[4812]: I0216 13:36:16.552115 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 13:36:16 crc kubenswrapper[4812]: I0216 13:36:16.560232 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 13:36:16 crc kubenswrapper[4812]: I0216 13:36:16.646257 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 13:36:16 crc kubenswrapper[4812]: I0216 13:36:16.647257 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 13:36:16 crc kubenswrapper[4812]: I0216 13:36:16.651670 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 13:36:16 crc kubenswrapper[4812]: I0216 13:36:16.652588 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 13:36:17 crc kubenswrapper[4812]: I0216 13:36:17.099989 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 13:36:17 crc kubenswrapper[4812]: I0216 13:36:17.370844 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 13:36:17 crc kubenswrapper[4812]: I0216 13:36:17.515791 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 13:36:17 crc kubenswrapper[4812]: I0216 13:36:17.581161 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 13:36:17 crc kubenswrapper[4812]: I0216 13:36:17.699222 4812 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 13:36:17 crc kubenswrapper[4812]: I0216 13:36:17.723864 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 13:36:17 crc kubenswrapper[4812]: I0216 13:36:17.932464 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 13:36:17 crc kubenswrapper[4812]: I0216 13:36:17.965945 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 13:36:18 crc kubenswrapper[4812]: I0216 13:36:18.141751 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 13:36:18 crc kubenswrapper[4812]: I0216 13:36:18.235670 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 13:36:18 crc kubenswrapper[4812]: I0216 13:36:18.314354 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 13:36:18 crc kubenswrapper[4812]: I0216 13:36:18.335613 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 13:36:18 crc kubenswrapper[4812]: I0216 13:36:18.386051 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 13:36:18 crc kubenswrapper[4812]: I0216 13:36:18.389119 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 13:36:18 crc kubenswrapper[4812]: I0216 13:36:18.420035 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 13:36:18 crc kubenswrapper[4812]: I0216 13:36:18.510268 
4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 13:36:18 crc kubenswrapper[4812]: I0216 13:36:18.511233 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 13:36:18 crc kubenswrapper[4812]: I0216 13:36:18.549562 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 13:36:18 crc kubenswrapper[4812]: I0216 13:36:18.666374 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 13:36:18 crc kubenswrapper[4812]: I0216 13:36:18.866627 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 13:36:18 crc kubenswrapper[4812]: I0216 13:36:18.945601 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 13:36:18 crc kubenswrapper[4812]: I0216 13:36:18.988678 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 13:36:19 crc kubenswrapper[4812]: I0216 13:36:19.193347 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 13:36:19 crc kubenswrapper[4812]: I0216 13:36:19.200161 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 13:36:19 crc kubenswrapper[4812]: I0216 13:36:19.234774 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 13:36:19 crc kubenswrapper[4812]: I0216 13:36:19.416809 4812 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 13:36:19 crc kubenswrapper[4812]: I0216 13:36:19.491149 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 13:36:19 crc kubenswrapper[4812]: I0216 13:36:19.638317 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 13:36:19 crc kubenswrapper[4812]: I0216 13:36:19.670934 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 13:36:19 crc kubenswrapper[4812]: I0216 13:36:19.758326 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 13:36:19 crc kubenswrapper[4812]: I0216 13:36:19.888271 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 13:36:19 crc kubenswrapper[4812]: I0216 13:36:19.895664 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 13:36:19 crc kubenswrapper[4812]: I0216 13:36:19.923684 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 13:36:19 crc kubenswrapper[4812]: I0216 13:36:19.985574 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 13:36:20 crc kubenswrapper[4812]: I0216 13:36:20.041203 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 13:36:20 crc kubenswrapper[4812]: I0216 13:36:20.049846 4812 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 13:36:20 crc kubenswrapper[4812]: I0216 13:36:20.228704 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 13:36:20 crc kubenswrapper[4812]: I0216 13:36:20.267760 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 13:36:20 crc kubenswrapper[4812]: I0216 13:36:20.332421 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 13:36:20 crc kubenswrapper[4812]: I0216 13:36:20.356329 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 13:36:20 crc kubenswrapper[4812]: I0216 13:36:20.429088 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 13:36:20 crc kubenswrapper[4812]: I0216 13:36:20.574259 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 13:36:20 crc kubenswrapper[4812]: I0216 13:36:20.672969 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 13:36:20 crc kubenswrapper[4812]: I0216 13:36:20.742554 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 13:36:20 crc kubenswrapper[4812]: I0216 13:36:20.754123 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 13:36:20 crc kubenswrapper[4812]: I0216 13:36:20.782119 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 13:36:20 crc kubenswrapper[4812]: I0216 13:36:20.812720 4812 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 13:36:20 crc kubenswrapper[4812]: I0216 13:36:20.873676 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 13:36:20 crc kubenswrapper[4812]: I0216 13:36:20.930014 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 13:36:20 crc kubenswrapper[4812]: I0216 13:36:20.982496 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.031773 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.049164 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.119824 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.134399 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.152353 4812 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.234821 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.255835 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 13:36:21 crc kubenswrapper[4812]: 
I0216 13:36:21.287666 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.309594 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.313668 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.316747 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.324200 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.389677 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.502633 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.583249 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.604673 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.667118 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.691853 4812 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.697534 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.748574 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.798506 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.886135 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.936484 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.959772 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 13:36:21 crc kubenswrapper[4812]: I0216 13:36:21.969709 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.048148 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.054008 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.110418 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 13:36:22 
crc kubenswrapper[4812]: I0216 13:36:22.192284 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.194415 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.211836 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.401525 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.461486 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.467270 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.468562 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.487686 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.488921 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.577181 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.623660 4812 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-stats-default" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.689519 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.704674 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.741841 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.746484 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.746963 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.785636 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.820961 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.880350 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.891921 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 13:36:22 crc kubenswrapper[4812]: I0216 13:36:22.982721 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.051832 4812 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.061148 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.085819 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.128633 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.181353 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.204805 4812 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.206060 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.255251 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.309666 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.341947 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.366230 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.377638 4812 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.388417 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.408626 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.509402 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.571427 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.609048 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.638611 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.660898 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.714186 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.782730 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 13:36:23 crc kubenswrapper[4812]: I0216 13:36:23.862037 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 13:36:23 crc kubenswrapper[4812]: 
I0216 13:36:23.898336 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.022780 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.073728 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.186719 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.194627 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.203789 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.250510 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.316926 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.390508 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.401940 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.420592 4812 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.420744 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.509098 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.651504 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.672047 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.734433 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.885045 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.901894 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 13:36:24 crc kubenswrapper[4812]: I0216 13:36:24.990628 4812 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.025547 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.042167 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 13:36:25 crc 
kubenswrapper[4812]: I0216 13:36:25.049146 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.078681 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.150716 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.194882 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.207723 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.217027 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.301629 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.310604 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.334974 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.442489 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.460434 4812 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.681130 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.697490 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.723477 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.776084 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.796541 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.798754 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.799825 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.834514 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.867076 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 13:36:25 crc kubenswrapper[4812]: I0216 13:36:25.997989 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 13:36:26 crc kubenswrapper[4812]: 
I0216 13:36:26.034302 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.113872 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.122922 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.163066 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.278243 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.299272 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.442849 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.502383 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.507770 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.553765 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.556477 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" 
Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.595281 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.620419 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.673918 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.735953 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.753319 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.798166 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.817599 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 13:36:26 crc kubenswrapper[4812]: I0216 13:36:26.960794 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.055542 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.068772 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.100288 4812 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.128813 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.177484 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.180426 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.209929 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.220464 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.256299 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.295707 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.313008 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.332400 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.385545 4812 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.422308 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.452238 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.515407 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.537324 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.555746 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.581382 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.665610 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.816343 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 13:36:27 crc kubenswrapper[4812]: I0216 13:36:27.927809 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 13:36:28 crc kubenswrapper[4812]: I0216 13:36:28.038555 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 13:36:28 crc kubenswrapper[4812]: 
I0216 13:36:28.056139 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 13:36:28 crc kubenswrapper[4812]: I0216 13:36:28.069535 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 13:36:28 crc kubenswrapper[4812]: I0216 13:36:28.192266 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 13:36:28 crc kubenswrapper[4812]: I0216 13:36:28.231475 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 13:36:28 crc kubenswrapper[4812]: I0216 13:36:28.494182 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 13:36:28 crc kubenswrapper[4812]: I0216 13:36:28.504852 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 13:36:28 crc kubenswrapper[4812]: I0216 13:36:28.779952 4812 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 13:36:28 crc kubenswrapper[4812]: I0216 13:36:28.783360 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=45.783332762 podStartE2EDuration="45.783332762s" podCreationTimestamp="2026-02-16 13:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:36:02.986491898 +0000 UTC m=+252.050822609" watchObservedRunningTime="2026-02-16 13:36:28.783332762 +0000 UTC m=+277.847663463" Feb 16 13:36:28 crc kubenswrapper[4812]: I0216 13:36:28.784817 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 13:36:28 crc kubenswrapper[4812]: I0216 13:36:28.784862 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 13:36:28 crc kubenswrapper[4812]: I0216 13:36:28.790983 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 13:36:28 crc kubenswrapper[4812]: I0216 13:36:28.811345 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.811315305 podStartE2EDuration="26.811315305s" podCreationTimestamp="2026-02-16 13:36:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:36:28.809745237 +0000 UTC m=+277.874075938" watchObservedRunningTime="2026-02-16 13:36:28.811315305 +0000 UTC m=+277.875646006" Feb 16 13:36:28 crc kubenswrapper[4812]: I0216 13:36:28.856293 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 13:36:28 crc kubenswrapper[4812]: I0216 13:36:28.865054 4812 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 13:36:28 crc kubenswrapper[4812]: I0216 13:36:28.895086 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 13:36:28 crc kubenswrapper[4812]: I0216 13:36:28.903286 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 13:36:29 crc kubenswrapper[4812]: I0216 13:36:29.069384 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 13:36:29 crc kubenswrapper[4812]: I0216 13:36:29.094527 4812 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 13:36:29 crc kubenswrapper[4812]: I0216 13:36:29.097196 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 13:36:29 crc kubenswrapper[4812]: I0216 13:36:29.478412 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 13:36:29 crc kubenswrapper[4812]: I0216 13:36:29.644179 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 13:36:29 crc kubenswrapper[4812]: I0216 13:36:29.953112 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 13:36:30 crc kubenswrapper[4812]: I0216 13:36:30.390210 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 13:36:30 crc kubenswrapper[4812]: I0216 13:36:30.774517 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 13:36:30 crc kubenswrapper[4812]: I0216 13:36:30.824041 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 13:36:31 crc kubenswrapper[4812]: I0216 13:36:31.611057 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 13:36:31 crc kubenswrapper[4812]: I0216 13:36:31.830145 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 13:36:32 crc kubenswrapper[4812]: I0216 13:36:32.476309 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 13:36:33 crc kubenswrapper[4812]: I0216 13:36:33.249269 4812 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Feb 16 13:36:35 crc kubenswrapper[4812]: I0216 13:36:35.682146 4812 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 13:36:35 crc kubenswrapper[4812]: I0216 13:36:35.682509 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://66d8cb48d213bcf59ce26660afb8b507b8f3ad2d1329acf007b1d8f94715f0b9" gracePeriod=5 Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.157478 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.157704 4812 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="66d8cb48d213bcf59ce26660afb8b507b8f3ad2d1329acf007b1d8f94715f0b9" exitCode=137 Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.281351 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.281421 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.400484 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.400552 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.400619 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.400638 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.400660 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.400690 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: 
"manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.400775 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.400843 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.400963 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.401077 4812 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.401108 4812 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.401133 4812 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.414018 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.502681 4812 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.502715 4812 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.892060 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.892390 4812 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.904017 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.904091 4812 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="33c7c522-0d0d-4bf2-b723-3dbecbcd3496" Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.907776 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 13:36:41 crc kubenswrapper[4812]: I0216 13:36:41.907825 4812 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="33c7c522-0d0d-4bf2-b723-3dbecbcd3496" Feb 16 13:36:42 crc kubenswrapper[4812]: I0216 13:36:42.164131 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 13:36:42 crc kubenswrapper[4812]: I0216 13:36:42.164222 4812 scope.go:117] "RemoveContainer" containerID="66d8cb48d213bcf59ce26660afb8b507b8f3ad2d1329acf007b1d8f94715f0b9" Feb 16 13:36:42 crc kubenswrapper[4812]: I0216 13:36:42.164271 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 13:36:51 crc kubenswrapper[4812]: I0216 13:36:51.672416 4812 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.670239 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-82xg7"] Feb 16 13:37:02 crc kubenswrapper[4812]: E0216 13:37:02.670953 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.670966 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 13:37:02 crc kubenswrapper[4812]: E0216 13:37:02.670998 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4158c95-a923-4240-a8bc-f9c44270275e" containerName="installer" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.671004 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4158c95-a923-4240-a8bc-f9c44270275e" containerName="installer" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.671098 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4158c95-a923-4240-a8bc-f9c44270275e" containerName="installer" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.671107 4812 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.671581 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.720108 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-82xg7"] Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.793203 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmm8g\" (UniqueName: \"kubernetes.io/projected/32da75af-c4c6-41a5-9b1f-bc764dcec325-kube-api-access-tmm8g\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.793275 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/32da75af-c4c6-41a5-9b1f-bc764dcec325-ca-trust-extracted\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.793307 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/32da75af-c4c6-41a5-9b1f-bc764dcec325-registry-tls\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.793323 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/32da75af-c4c6-41a5-9b1f-bc764dcec325-trusted-ca\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.793349 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/32da75af-c4c6-41a5-9b1f-bc764dcec325-installation-pull-secrets\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.793367 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32da75af-c4c6-41a5-9b1f-bc764dcec325-bound-sa-token\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.793541 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.793581 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/32da75af-c4c6-41a5-9b1f-bc764dcec325-registry-certificates\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.812647 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.895195 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/32da75af-c4c6-41a5-9b1f-bc764dcec325-installation-pull-secrets\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.895250 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32da75af-c4c6-41a5-9b1f-bc764dcec325-bound-sa-token\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.895352 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/32da75af-c4c6-41a5-9b1f-bc764dcec325-registry-certificates\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.895396 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmm8g\" (UniqueName: 
\"kubernetes.io/projected/32da75af-c4c6-41a5-9b1f-bc764dcec325-kube-api-access-tmm8g\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.895424 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/32da75af-c4c6-41a5-9b1f-bc764dcec325-ca-trust-extracted\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.895486 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/32da75af-c4c6-41a5-9b1f-bc764dcec325-registry-tls\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.895530 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32da75af-c4c6-41a5-9b1f-bc764dcec325-trusted-ca\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.896111 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/32da75af-c4c6-41a5-9b1f-bc764dcec325-ca-trust-extracted\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.897206 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32da75af-c4c6-41a5-9b1f-bc764dcec325-trusted-ca\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.897999 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/32da75af-c4c6-41a5-9b1f-bc764dcec325-registry-certificates\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.901623 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/32da75af-c4c6-41a5-9b1f-bc764dcec325-installation-pull-secrets\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.903227 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/32da75af-c4c6-41a5-9b1f-bc764dcec325-registry-tls\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.912466 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32da75af-c4c6-41a5-9b1f-bc764dcec325-bound-sa-token\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.912672 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmm8g\" (UniqueName: \"kubernetes.io/projected/32da75af-c4c6-41a5-9b1f-bc764dcec325-kube-api-access-tmm8g\") pod \"image-registry-66df7c8f76-82xg7\" (UID: \"32da75af-c4c6-41a5-9b1f-bc764dcec325\") " pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:02 crc kubenswrapper[4812]: I0216 13:37:02.995123 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:03 crc kubenswrapper[4812]: I0216 13:37:03.375133 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-82xg7"] Feb 16 13:37:04 crc kubenswrapper[4812]: I0216 13:37:04.290598 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" event={"ID":"32da75af-c4c6-41a5-9b1f-bc764dcec325","Type":"ContainerStarted","Data":"99280104386f05d8c0538ebbdcd5f46ede55052dc78b089a84281a5893a5f30d"} Feb 16 13:37:04 crc kubenswrapper[4812]: I0216 13:37:04.291057 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:04 crc kubenswrapper[4812]: I0216 13:37:04.291077 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" event={"ID":"32da75af-c4c6-41a5-9b1f-bc764dcec325","Type":"ContainerStarted","Data":"a03bb6a1425eb1de0818b263adb2517f0160ae207e40c7753dbf5073fd26743f"} Feb 16 13:37:04 crc kubenswrapper[4812]: I0216 13:37:04.313125 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" podStartSLOduration=2.313107365 podStartE2EDuration="2.313107365s" podCreationTimestamp="2026-02-16 13:37:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:37:04.3096115 +0000 UTC m=+313.373942231" watchObservedRunningTime="2026-02-16 13:37:04.313107365 +0000 UTC m=+313.377438066" Feb 16 13:37:23 crc kubenswrapper[4812]: I0216 13:37:23.010735 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-82xg7" Feb 16 13:37:23 crc kubenswrapper[4812]: I0216 13:37:23.070664 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2f89v"] Feb 16 13:37:44 crc kubenswrapper[4812]: I0216 13:37:44.549744 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:37:44 crc kubenswrapper[4812]: I0216 13:37:44.550669 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.117686 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" podUID="f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7" containerName="registry" containerID="cri-o://87d76816501304b9e257e87bb1166d8efd9fcc4f3c20de53d92b4ce0c88fc651" gracePeriod=30 Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.452216 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.533568 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-installation-pull-secrets\") pod \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.533850 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.533895 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8258\" (UniqueName: \"kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-kube-api-access-p8258\") pod \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.533922 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-bound-sa-token\") pod \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.533986 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-trusted-ca\") pod \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.534008 4812 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-registry-tls\") pod \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.534035 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-ca-trust-extracted\") pod \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.534077 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-registry-certificates\") pod \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\" (UID: \"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7\") " Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.534727 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.534986 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.540959 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.541258 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-kube-api-access-p8258" (OuterVolumeSpecName: "kube-api-access-p8258") pod "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7"). InnerVolumeSpecName "kube-api-access-p8258". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.541837 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.541970 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.546763 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.551408 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7" (UID: "f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.635821 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.635851 4812 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.635864 4812 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.635874 4812 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.635883 4812 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.635892 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8258\" (UniqueName: \"kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-kube-api-access-p8258\") on node \"crc\" DevicePath \"\""
Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.635901 4812 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.729042 4812 generic.go:334] "Generic (PLEG): container finished" podID="f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7" containerID="87d76816501304b9e257e87bb1166d8efd9fcc4f3c20de53d92b4ce0c88fc651" exitCode=0
Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.729084 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" event={"ID":"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7","Type":"ContainerDied","Data":"87d76816501304b9e257e87bb1166d8efd9fcc4f3c20de53d92b4ce0c88fc651"}
Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.729113 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2f89v" event={"ID":"f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7","Type":"ContainerDied","Data":"e5bd39cd6b902b0dc8348d4642a3ada4a57dc2b836cfc4cb97213468b0960739"}
Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.729129 4812 scope.go:117] "RemoveContainer" containerID="87d76816501304b9e257e87bb1166d8efd9fcc4f3c20de53d92b4ce0c88fc651"
Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.729139 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2f89v"
Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.749019 4812 scope.go:117] "RemoveContainer" containerID="87d76816501304b9e257e87bb1166d8efd9fcc4f3c20de53d92b4ce0c88fc651"
Feb 16 13:37:48 crc kubenswrapper[4812]: E0216 13:37:48.753945 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87d76816501304b9e257e87bb1166d8efd9fcc4f3c20de53d92b4ce0c88fc651\": container with ID starting with 87d76816501304b9e257e87bb1166d8efd9fcc4f3c20de53d92b4ce0c88fc651 not found: ID does not exist" containerID="87d76816501304b9e257e87bb1166d8efd9fcc4f3c20de53d92b4ce0c88fc651"
Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.754055 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87d76816501304b9e257e87bb1166d8efd9fcc4f3c20de53d92b4ce0c88fc651"} err="failed to get container status \"87d76816501304b9e257e87bb1166d8efd9fcc4f3c20de53d92b4ce0c88fc651\": rpc error: code = NotFound desc = could not find container \"87d76816501304b9e257e87bb1166d8efd9fcc4f3c20de53d92b4ce0c88fc651\": container with ID starting with 87d76816501304b9e257e87bb1166d8efd9fcc4f3c20de53d92b4ce0c88fc651 not found: ID does not exist"
Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.759715 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2f89v"]
Feb 16 13:37:48 crc kubenswrapper[4812]: I0216 13:37:48.763121 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2f89v"]
Feb 16 13:37:49 crc kubenswrapper[4812]: I0216 13:37:49.885650 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7" path="/var/lib/kubelet/pods/f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7/volumes"
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.766098 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gfhfv"]
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.766942 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gfhfv" podUID="567e2fcc-e342-41e9-a406-4758f7c5551e" containerName="registry-server" containerID="cri-o://c5bb604f6590b8644e494f5182c4c1c2ba349637a33d8ed037a109968ac8efc3" gracePeriod=30
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.772923 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t9zmh"]
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.773226 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t9zmh" podUID="2984d252-d29e-49b5-87ed-9ce7d19edc6d" containerName="registry-server" containerID="cri-o://d9a430ff631e78c5495fe28e0ee2a6e74352dbc43d3c25fa783905541e8adeb8" gracePeriod=30
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.785787 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kc7dg"]
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.786074 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" podUID="7147a2f9-6f8c-4fa5-b6da-a6a67a53e231" containerName="marketplace-operator" containerID="cri-o://15d1583dafa02efaea6f9092a21449c4f85ba773664eb3dbf582c907b0eb91e7" gracePeriod=30
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.790277 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqhcl"]
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.790567 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cqhcl" podUID="a297c2d9-88a8-4019-94f5-c1f5498bee86" containerName="registry-server" containerID="cri-o://4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626" gracePeriod=30
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.802684 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lm499"]
Feb 16 13:38:12 crc kubenswrapper[4812]: E0216 13:38:12.802988 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7" containerName="registry"
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.803008 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7" containerName="registry"
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.803175 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1b2bcd2-dfe9-46c1-b0ee-31f73a548af7" containerName="registry"
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.803732 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lm499"
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.810463 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fjz4f"]
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.810719 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fjz4f" podUID="c1a9695b-636b-4b29-a6dd-4e0708706b74" containerName="registry-server" containerID="cri-o://2375994974e7fd5af92b6e11947c9e40cbeeb5f3774b9a88742bec12ffb25c76" gracePeriod=30
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.823351 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lm499"]
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.852878 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/62c219b7-14b9-4105-8dcd-195446a4b07d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lm499\" (UID: \"62c219b7-14b9-4105-8dcd-195446a4b07d\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm499"
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.852974 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxczc\" (UniqueName: \"kubernetes.io/projected/62c219b7-14b9-4105-8dcd-195446a4b07d-kube-api-access-mxczc\") pod \"marketplace-operator-79b997595-lm499\" (UID: \"62c219b7-14b9-4105-8dcd-195446a4b07d\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm499"
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.853068 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/62c219b7-14b9-4105-8dcd-195446a4b07d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lm499\" (UID: \"62c219b7-14b9-4105-8dcd-195446a4b07d\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm499"
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.954256 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxczc\" (UniqueName: \"kubernetes.io/projected/62c219b7-14b9-4105-8dcd-195446a4b07d-kube-api-access-mxczc\") pod \"marketplace-operator-79b997595-lm499\" (UID: \"62c219b7-14b9-4105-8dcd-195446a4b07d\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm499"
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.954701 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/62c219b7-14b9-4105-8dcd-195446a4b07d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lm499\" (UID: \"62c219b7-14b9-4105-8dcd-195446a4b07d\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm499"
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.954806 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/62c219b7-14b9-4105-8dcd-195446a4b07d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lm499\" (UID: \"62c219b7-14b9-4105-8dcd-195446a4b07d\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm499"
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.956088 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/62c219b7-14b9-4105-8dcd-195446a4b07d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lm499\" (UID: \"62c219b7-14b9-4105-8dcd-195446a4b07d\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm499"
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.962279 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/62c219b7-14b9-4105-8dcd-195446a4b07d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lm499\" (UID: \"62c219b7-14b9-4105-8dcd-195446a4b07d\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm499"
Feb 16 13:38:12 crc kubenswrapper[4812]: I0216 13:38:12.979119 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxczc\" (UniqueName: \"kubernetes.io/projected/62c219b7-14b9-4105-8dcd-195446a4b07d-kube-api-access-mxczc\") pod \"marketplace-operator-79b997595-lm499\" (UID: \"62c219b7-14b9-4105-8dcd-195446a4b07d\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm499"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.127109 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lm499"
Feb 16 13:38:13 crc kubenswrapper[4812]: E0216 13:38:13.224153 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626 is running failed: container process not found" containerID="4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626" cmd=["grpc_health_probe","-addr=:50051"]
Feb 16 13:38:13 crc kubenswrapper[4812]: E0216 13:38:13.224721 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626 is running failed: container process not found" containerID="4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626" cmd=["grpc_health_probe","-addr=:50051"]
Feb 16 13:38:13 crc kubenswrapper[4812]: E0216 13:38:13.224973 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626 is running failed: container process not found" containerID="4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626" cmd=["grpc_health_probe","-addr=:50051"]
Feb 16 13:38:13 crc kubenswrapper[4812]: E0216 13:38:13.224999 4812 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-cqhcl" podUID="a297c2d9-88a8-4019-94f5-c1f5498bee86" containerName="registry-server"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.261708 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t9zmh"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.264080 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gfhfv"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.274330 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.282686 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqhcl"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.289606 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fjz4f"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.360771 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-marketplace-operator-metrics\") pod \"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231\" (UID: \"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231\") "
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.360803 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a297c2d9-88a8-4019-94f5-c1f5498bee86-utilities\") pod \"a297c2d9-88a8-4019-94f5-c1f5498bee86\" (UID: \"a297c2d9-88a8-4019-94f5-c1f5498bee86\") "
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.360867 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2984d252-d29e-49b5-87ed-9ce7d19edc6d-utilities\") pod \"2984d252-d29e-49b5-87ed-9ce7d19edc6d\" (UID: \"2984d252-d29e-49b5-87ed-9ce7d19edc6d\") "
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.360887 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a297c2d9-88a8-4019-94f5-c1f5498bee86-catalog-content\") pod \"a297c2d9-88a8-4019-94f5-c1f5498bee86\" (UID: \"a297c2d9-88a8-4019-94f5-c1f5498bee86\") "
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.360901 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1a9695b-636b-4b29-a6dd-4e0708706b74-catalog-content\") pod \"c1a9695b-636b-4b29-a6dd-4e0708706b74\" (UID: \"c1a9695b-636b-4b29-a6dd-4e0708706b74\") "
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.360927 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvng8\" (UniqueName: \"kubernetes.io/projected/2984d252-d29e-49b5-87ed-9ce7d19edc6d-kube-api-access-kvng8\") pod \"2984d252-d29e-49b5-87ed-9ce7d19edc6d\" (UID: \"2984d252-d29e-49b5-87ed-9ce7d19edc6d\") "
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.360950 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-marketplace-trusted-ca\") pod \"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231\" (UID: \"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231\") "
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.360967 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4l74\" (UniqueName: \"kubernetes.io/projected/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-kube-api-access-b4l74\") pod \"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231\" (UID: \"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231\") "
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.360985 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1a9695b-636b-4b29-a6dd-4e0708706b74-utilities\") pod \"c1a9695b-636b-4b29-a6dd-4e0708706b74\" (UID: \"c1a9695b-636b-4b29-a6dd-4e0708706b74\") "
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.361002 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzmm7\" (UniqueName: \"kubernetes.io/projected/567e2fcc-e342-41e9-a406-4758f7c5551e-kube-api-access-rzmm7\") pod \"567e2fcc-e342-41e9-a406-4758f7c5551e\" (UID: \"567e2fcc-e342-41e9-a406-4758f7c5551e\") "
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.361024 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sddfp\" (UniqueName: \"kubernetes.io/projected/c1a9695b-636b-4b29-a6dd-4e0708706b74-kube-api-access-sddfp\") pod \"c1a9695b-636b-4b29-a6dd-4e0708706b74\" (UID: \"c1a9695b-636b-4b29-a6dd-4e0708706b74\") "
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.361051 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567e2fcc-e342-41e9-a406-4758f7c5551e-catalog-content\") pod \"567e2fcc-e342-41e9-a406-4758f7c5551e\" (UID: \"567e2fcc-e342-41e9-a406-4758f7c5551e\") "
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.361074 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95c4d\" (UniqueName: \"kubernetes.io/projected/a297c2d9-88a8-4019-94f5-c1f5498bee86-kube-api-access-95c4d\") pod \"a297c2d9-88a8-4019-94f5-c1f5498bee86\" (UID: \"a297c2d9-88a8-4019-94f5-c1f5498bee86\") "
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.361106 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567e2fcc-e342-41e9-a406-4758f7c5551e-utilities\") pod \"567e2fcc-e342-41e9-a406-4758f7c5551e\" (UID: \"567e2fcc-e342-41e9-a406-4758f7c5551e\") "
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.361128 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2984d252-d29e-49b5-87ed-9ce7d19edc6d-catalog-content\") pod \"2984d252-d29e-49b5-87ed-9ce7d19edc6d\" (UID: \"2984d252-d29e-49b5-87ed-9ce7d19edc6d\") "
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.366287 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7147a2f9-6f8c-4fa5-b6da-a6a67a53e231" (UID: "7147a2f9-6f8c-4fa5-b6da-a6a67a53e231"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.371272 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7147a2f9-6f8c-4fa5-b6da-a6a67a53e231" (UID: "7147a2f9-6f8c-4fa5-b6da-a6a67a53e231"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.371593 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a297c2d9-88a8-4019-94f5-c1f5498bee86-utilities" (OuterVolumeSpecName: "utilities") pod "a297c2d9-88a8-4019-94f5-c1f5498bee86" (UID: "a297c2d9-88a8-4019-94f5-c1f5498bee86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.371644 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a9695b-636b-4b29-a6dd-4e0708706b74-kube-api-access-sddfp" (OuterVolumeSpecName: "kube-api-access-sddfp") pod "c1a9695b-636b-4b29-a6dd-4e0708706b74" (UID: "c1a9695b-636b-4b29-a6dd-4e0708706b74"). InnerVolumeSpecName "kube-api-access-sddfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.372246 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2984d252-d29e-49b5-87ed-9ce7d19edc6d-utilities" (OuterVolumeSpecName: "utilities") pod "2984d252-d29e-49b5-87ed-9ce7d19edc6d" (UID: "2984d252-d29e-49b5-87ed-9ce7d19edc6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.373246 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567e2fcc-e342-41e9-a406-4758f7c5551e-utilities" (OuterVolumeSpecName: "utilities") pod "567e2fcc-e342-41e9-a406-4758f7c5551e" (UID: "567e2fcc-e342-41e9-a406-4758f7c5551e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.374703 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2984d252-d29e-49b5-87ed-9ce7d19edc6d-kube-api-access-kvng8" (OuterVolumeSpecName: "kube-api-access-kvng8") pod "2984d252-d29e-49b5-87ed-9ce7d19edc6d" (UID: "2984d252-d29e-49b5-87ed-9ce7d19edc6d"). InnerVolumeSpecName "kube-api-access-kvng8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.375347 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a297c2d9-88a8-4019-94f5-c1f5498bee86-kube-api-access-95c4d" (OuterVolumeSpecName: "kube-api-access-95c4d") pod "a297c2d9-88a8-4019-94f5-c1f5498bee86" (UID: "a297c2d9-88a8-4019-94f5-c1f5498bee86"). InnerVolumeSpecName "kube-api-access-95c4d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.375915 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1a9695b-636b-4b29-a6dd-4e0708706b74-utilities" (OuterVolumeSpecName: "utilities") pod "c1a9695b-636b-4b29-a6dd-4e0708706b74" (UID: "c1a9695b-636b-4b29-a6dd-4e0708706b74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.383634 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567e2fcc-e342-41e9-a406-4758f7c5551e-kube-api-access-rzmm7" (OuterVolumeSpecName: "kube-api-access-rzmm7") pod "567e2fcc-e342-41e9-a406-4758f7c5551e" (UID: "567e2fcc-e342-41e9-a406-4758f7c5551e"). InnerVolumeSpecName "kube-api-access-rzmm7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.397279 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-kube-api-access-b4l74" (OuterVolumeSpecName: "kube-api-access-b4l74") pod "7147a2f9-6f8c-4fa5-b6da-a6a67a53e231" (UID: "7147a2f9-6f8c-4fa5-b6da-a6a67a53e231"). InnerVolumeSpecName "kube-api-access-b4l74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.402593 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lm499"]
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.412946 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a297c2d9-88a8-4019-94f5-c1f5498bee86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a297c2d9-88a8-4019-94f5-c1f5498bee86" (UID: "a297c2d9-88a8-4019-94f5-c1f5498bee86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.440097 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2984d252-d29e-49b5-87ed-9ce7d19edc6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2984d252-d29e-49b5-87ed-9ce7d19edc6d" (UID: "2984d252-d29e-49b5-87ed-9ce7d19edc6d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.462432 4812 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.462515 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a297c2d9-88a8-4019-94f5-c1f5498bee86-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.462529 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2984d252-d29e-49b5-87ed-9ce7d19edc6d-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.462539 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a297c2d9-88a8-4019-94f5-c1f5498bee86-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.462551 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvng8\" (UniqueName: \"kubernetes.io/projected/2984d252-d29e-49b5-87ed-9ce7d19edc6d-kube-api-access-kvng8\") on node \"crc\" DevicePath \"\""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.462562 4812 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.462572 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4l74\" (UniqueName: \"kubernetes.io/projected/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231-kube-api-access-b4l74\") on node \"crc\" DevicePath \"\""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.462582 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1a9695b-636b-4b29-a6dd-4e0708706b74-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.462593 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzmm7\" (UniqueName: \"kubernetes.io/projected/567e2fcc-e342-41e9-a406-4758f7c5551e-kube-api-access-rzmm7\") on node \"crc\" DevicePath \"\""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.462606 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sddfp\" (UniqueName: \"kubernetes.io/projected/c1a9695b-636b-4b29-a6dd-4e0708706b74-kube-api-access-sddfp\") on node \"crc\" DevicePath \"\""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.462617 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95c4d\" (UniqueName: \"kubernetes.io/projected/a297c2d9-88a8-4019-94f5-c1f5498bee86-kube-api-access-95c4d\") on node \"crc\" DevicePath \"\""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.462628 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567e2fcc-e342-41e9-a406-4758f7c5551e-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.462639 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2984d252-d29e-49b5-87ed-9ce7d19edc6d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.473170 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567e2fcc-e342-41e9-a406-4758f7c5551e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "567e2fcc-e342-41e9-a406-4758f7c5551e" (UID: "567e2fcc-e342-41e9-a406-4758f7c5551e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.518348 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1a9695b-636b-4b29-a6dd-4e0708706b74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1a9695b-636b-4b29-a6dd-4e0708706b74" (UID: "c1a9695b-636b-4b29-a6dd-4e0708706b74"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.564078 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1a9695b-636b-4b29-a6dd-4e0708706b74-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.564115 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567e2fcc-e342-41e9-a406-4758f7c5551e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.875191 4812 generic.go:334] "Generic (PLEG): container finished" podID="2984d252-d29e-49b5-87ed-9ce7d19edc6d" containerID="d9a430ff631e78c5495fe28e0ee2a6e74352dbc43d3c25fa783905541e8adeb8" exitCode=0
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.875246 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t9zmh" event={"ID":"2984d252-d29e-49b5-87ed-9ce7d19edc6d","Type":"ContainerDied","Data":"d9a430ff631e78c5495fe28e0ee2a6e74352dbc43d3c25fa783905541e8adeb8"}
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.875270 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t9zmh" event={"ID":"2984d252-d29e-49b5-87ed-9ce7d19edc6d","Type":"ContainerDied","Data":"0f0c5ee9c2deb094d00298afed04360761e56debce67357e8a92a4059eeddc94"}
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.875286 4812 scope.go:117] "RemoveContainer" containerID="d9a430ff631e78c5495fe28e0ee2a6e74352dbc43d3c25fa783905541e8adeb8"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.875403 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t9zmh"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.878478 4812 generic.go:334] "Generic (PLEG): container finished" podID="7147a2f9-6f8c-4fa5-b6da-a6a67a53e231" containerID="15d1583dafa02efaea6f9092a21449c4f85ba773664eb3dbf582c907b0eb91e7" exitCode=0
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.878578 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.880224 4812 generic.go:334] "Generic (PLEG): container finished" podID="c1a9695b-636b-4b29-a6dd-4e0708706b74" containerID="2375994974e7fd5af92b6e11947c9e40cbeeb5f3774b9a88742bec12ffb25c76" exitCode=0
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.880332 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fjz4f"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.885460 4812 generic.go:334] "Generic (PLEG): container finished" podID="567e2fcc-e342-41e9-a406-4758f7c5551e" containerID="c5bb604f6590b8644e494f5182c4c1c2ba349637a33d8ed037a109968ac8efc3" exitCode=0
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.885563 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gfhfv"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.900689 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" event={"ID":"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231","Type":"ContainerDied","Data":"15d1583dafa02efaea6f9092a21449c4f85ba773664eb3dbf582c907b0eb91e7"}
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.900741 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kc7dg" event={"ID":"7147a2f9-6f8c-4fa5-b6da-a6a67a53e231","Type":"ContainerDied","Data":"460297272a4ce6c46ac44814a6a9f2c00285028b21a8c551bc5fd3255afa82f8"}
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.900767 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fjz4f" event={"ID":"c1a9695b-636b-4b29-a6dd-4e0708706b74","Type":"ContainerDied","Data":"2375994974e7fd5af92b6e11947c9e40cbeeb5f3774b9a88742bec12ffb25c76"}
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.900821 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fjz4f" event={"ID":"c1a9695b-636b-4b29-a6dd-4e0708706b74","Type":"ContainerDied","Data":"ed7912f86767084d314f6753c182757c59658d27850b2c463ee654930d9e998a"}
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.900838 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfhfv" event={"ID":"567e2fcc-e342-41e9-a406-4758f7c5551e","Type":"ContainerDied","Data":"c5bb604f6590b8644e494f5182c4c1c2ba349637a33d8ed037a109968ac8efc3"}
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.900854 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfhfv" event={"ID":"567e2fcc-e342-41e9-a406-4758f7c5551e","Type":"ContainerDied","Data":"8389cdbf0a0a10f94ee1a07f13cf7eb695b55db174d6b85f28781e6e8f9eaaf2"}
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.914576 4812 scope.go:117] "RemoveContainer" containerID="1a26a6535f9fa9d65575d2c892045a0a7ae14e1e81452a06a7bd9f3ae6746df3"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.921051 4812 generic.go:334] "Generic (PLEG): container finished" podID="a297c2d9-88a8-4019-94f5-c1f5498bee86" containerID="4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626" exitCode=0
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.921608 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqhcl"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.921555 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqhcl" event={"ID":"a297c2d9-88a8-4019-94f5-c1f5498bee86","Type":"ContainerDied","Data":"4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626"}
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.921664 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqhcl" event={"ID":"a297c2d9-88a8-4019-94f5-c1f5498bee86","Type":"ContainerDied","Data":"f3903fcc4af8e2caff9ee0bfe3a6456dc94ac5a8f51e4b6855387036cc1485b2"}
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.924143 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lm499" event={"ID":"62c219b7-14b9-4105-8dcd-195446a4b07d","Type":"ContainerStarted","Data":"d37279c64b316658d3c538f4fa2af96f7868035948decafb53ded896e0d1fe2e"}
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.924182 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lm499" event={"ID":"62c219b7-14b9-4105-8dcd-195446a4b07d","Type":"ContainerStarted","Data":"cd1ec3a18e7917cc1f1e70d35cfd0bc5a47e21e0bb4f707291d9c2da195b6b90"}
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.925677 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lm499"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.927868 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lm499"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.944964 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lm499" podStartSLOduration=1.9449447279999998 podStartE2EDuration="1.944944728s" podCreationTimestamp="2026-02-16 13:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:38:13.942098603 +0000 UTC m=+383.006429324" watchObservedRunningTime="2026-02-16 13:38:13.944944728 +0000 UTC m=+383.009275429"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.976736 4812 scope.go:117] "RemoveContainer" containerID="a0d9f7ca67851f90327c445631cdb6421d57b0c175a67e8823d1ff97f783fa73"
Feb 16 13:38:13 crc kubenswrapper[4812]: I0216 13:38:13.999595 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t9zmh"]
Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.008194 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t9zmh"]
Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.010529 4812 scope.go:117] "RemoveContainer" containerID="d9a430ff631e78c5495fe28e0ee2a6e74352dbc43d3c25fa783905541e8adeb8"
Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.010975 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9a430ff631e78c5495fe28e0ee2a6e74352dbc43d3c25fa783905541e8adeb8\": container with ID starting with d9a430ff631e78c5495fe28e0ee2a6e74352dbc43d3c25fa783905541e8adeb8 not found: ID does not exist" containerID="d9a430ff631e78c5495fe28e0ee2a6e74352dbc43d3c25fa783905541e8adeb8"
Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.011021 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9a430ff631e78c5495fe28e0ee2a6e74352dbc43d3c25fa783905541e8adeb8"} err="failed to get container status \"d9a430ff631e78c5495fe28e0ee2a6e74352dbc43d3c25fa783905541e8adeb8\": rpc error: code = NotFound desc = could not find container \"d9a430ff631e78c5495fe28e0ee2a6e74352dbc43d3c25fa783905541e8adeb8\": container with ID starting with d9a430ff631e78c5495fe28e0ee2a6e74352dbc43d3c25fa783905541e8adeb8 not found: ID does not exist"
Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.011052 4812 scope.go:117] "RemoveContainer" containerID="1a26a6535f9fa9d65575d2c892045a0a7ae14e1e81452a06a7bd9f3ae6746df3"
Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.011409 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a26a6535f9fa9d65575d2c892045a0a7ae14e1e81452a06a7bd9f3ae6746df3\": container with ID starting with 1a26a6535f9fa9d65575d2c892045a0a7ae14e1e81452a06a7bd9f3ae6746df3 not found: ID does not exist" containerID="1a26a6535f9fa9d65575d2c892045a0a7ae14e1e81452a06a7bd9f3ae6746df3"
Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.011435 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a26a6535f9fa9d65575d2c892045a0a7ae14e1e81452a06a7bd9f3ae6746df3"} err="failed to get container status \"1a26a6535f9fa9d65575d2c892045a0a7ae14e1e81452a06a7bd9f3ae6746df3\": rpc error: code = NotFound desc = could not find container 
\"1a26a6535f9fa9d65575d2c892045a0a7ae14e1e81452a06a7bd9f3ae6746df3\": container with ID starting with 1a26a6535f9fa9d65575d2c892045a0a7ae14e1e81452a06a7bd9f3ae6746df3 not found: ID does not exist" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.011488 4812 scope.go:117] "RemoveContainer" containerID="a0d9f7ca67851f90327c445631cdb6421d57b0c175a67e8823d1ff97f783fa73" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.011794 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0d9f7ca67851f90327c445631cdb6421d57b0c175a67e8823d1ff97f783fa73\": container with ID starting with a0d9f7ca67851f90327c445631cdb6421d57b0c175a67e8823d1ff97f783fa73 not found: ID does not exist" containerID="a0d9f7ca67851f90327c445631cdb6421d57b0c175a67e8823d1ff97f783fa73" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.011844 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0d9f7ca67851f90327c445631cdb6421d57b0c175a67e8823d1ff97f783fa73"} err="failed to get container status \"a0d9f7ca67851f90327c445631cdb6421d57b0c175a67e8823d1ff97f783fa73\": rpc error: code = NotFound desc = could not find container \"a0d9f7ca67851f90327c445631cdb6421d57b0c175a67e8823d1ff97f783fa73\": container with ID starting with a0d9f7ca67851f90327c445631cdb6421d57b0c175a67e8823d1ff97f783fa73 not found: ID does not exist" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.011871 4812 scope.go:117] "RemoveContainer" containerID="15d1583dafa02efaea6f9092a21449c4f85ba773664eb3dbf582c907b0eb91e7" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.012555 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fjz4f"] Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.015545 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fjz4f"] Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 
13:38:14.027020 4812 scope.go:117] "RemoveContainer" containerID="15d1583dafa02efaea6f9092a21449c4f85ba773664eb3dbf582c907b0eb91e7" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.027493 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15d1583dafa02efaea6f9092a21449c4f85ba773664eb3dbf582c907b0eb91e7\": container with ID starting with 15d1583dafa02efaea6f9092a21449c4f85ba773664eb3dbf582c907b0eb91e7 not found: ID does not exist" containerID="15d1583dafa02efaea6f9092a21449c4f85ba773664eb3dbf582c907b0eb91e7" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.027541 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15d1583dafa02efaea6f9092a21449c4f85ba773664eb3dbf582c907b0eb91e7"} err="failed to get container status \"15d1583dafa02efaea6f9092a21449c4f85ba773664eb3dbf582c907b0eb91e7\": rpc error: code = NotFound desc = could not find container \"15d1583dafa02efaea6f9092a21449c4f85ba773664eb3dbf582c907b0eb91e7\": container with ID starting with 15d1583dafa02efaea6f9092a21449c4f85ba773664eb3dbf582c907b0eb91e7 not found: ID does not exist" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.027565 4812 scope.go:117] "RemoveContainer" containerID="2375994974e7fd5af92b6e11947c9e40cbeeb5f3774b9a88742bec12ffb25c76" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.032706 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kc7dg"] Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.037810 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kc7dg"] Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.042004 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqhcl"] Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.045767 4812 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqhcl"] Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.048042 4812 scope.go:117] "RemoveContainer" containerID="8181840cd3084798a64ba8046f1bbf1e3e6ddc841abbec235ea39855c82de027" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.067151 4812 scope.go:117] "RemoveContainer" containerID="e130ecd9d28e4102e73d87046fcf9b46f5f152f109446dd01a647e2379cd8469" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.083288 4812 scope.go:117] "RemoveContainer" containerID="2375994974e7fd5af92b6e11947c9e40cbeeb5f3774b9a88742bec12ffb25c76" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.083808 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2375994974e7fd5af92b6e11947c9e40cbeeb5f3774b9a88742bec12ffb25c76\": container with ID starting with 2375994974e7fd5af92b6e11947c9e40cbeeb5f3774b9a88742bec12ffb25c76 not found: ID does not exist" containerID="2375994974e7fd5af92b6e11947c9e40cbeeb5f3774b9a88742bec12ffb25c76" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.083845 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2375994974e7fd5af92b6e11947c9e40cbeeb5f3774b9a88742bec12ffb25c76"} err="failed to get container status \"2375994974e7fd5af92b6e11947c9e40cbeeb5f3774b9a88742bec12ffb25c76\": rpc error: code = NotFound desc = could not find container \"2375994974e7fd5af92b6e11947c9e40cbeeb5f3774b9a88742bec12ffb25c76\": container with ID starting with 2375994974e7fd5af92b6e11947c9e40cbeeb5f3774b9a88742bec12ffb25c76 not found: ID does not exist" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.083872 4812 scope.go:117] "RemoveContainer" containerID="8181840cd3084798a64ba8046f1bbf1e3e6ddc841abbec235ea39855c82de027" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.084167 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"8181840cd3084798a64ba8046f1bbf1e3e6ddc841abbec235ea39855c82de027\": container with ID starting with 8181840cd3084798a64ba8046f1bbf1e3e6ddc841abbec235ea39855c82de027 not found: ID does not exist" containerID="8181840cd3084798a64ba8046f1bbf1e3e6ddc841abbec235ea39855c82de027" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.084199 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8181840cd3084798a64ba8046f1bbf1e3e6ddc841abbec235ea39855c82de027"} err="failed to get container status \"8181840cd3084798a64ba8046f1bbf1e3e6ddc841abbec235ea39855c82de027\": rpc error: code = NotFound desc = could not find container \"8181840cd3084798a64ba8046f1bbf1e3e6ddc841abbec235ea39855c82de027\": container with ID starting with 8181840cd3084798a64ba8046f1bbf1e3e6ddc841abbec235ea39855c82de027 not found: ID does not exist" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.084218 4812 scope.go:117] "RemoveContainer" containerID="e130ecd9d28e4102e73d87046fcf9b46f5f152f109446dd01a647e2379cd8469" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.084648 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e130ecd9d28e4102e73d87046fcf9b46f5f152f109446dd01a647e2379cd8469\": container with ID starting with e130ecd9d28e4102e73d87046fcf9b46f5f152f109446dd01a647e2379cd8469 not found: ID does not exist" containerID="e130ecd9d28e4102e73d87046fcf9b46f5f152f109446dd01a647e2379cd8469" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.084715 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e130ecd9d28e4102e73d87046fcf9b46f5f152f109446dd01a647e2379cd8469"} err="failed to get container status \"e130ecd9d28e4102e73d87046fcf9b46f5f152f109446dd01a647e2379cd8469\": rpc error: code = NotFound desc = could not find container 
\"e130ecd9d28e4102e73d87046fcf9b46f5f152f109446dd01a647e2379cd8469\": container with ID starting with e130ecd9d28e4102e73d87046fcf9b46f5f152f109446dd01a647e2379cd8469 not found: ID does not exist" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.084753 4812 scope.go:117] "RemoveContainer" containerID="c5bb604f6590b8644e494f5182c4c1c2ba349637a33d8ed037a109968ac8efc3" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.099504 4812 scope.go:117] "RemoveContainer" containerID="c5cde2928e76ed34ec0873c532eb57bc500471adc517cf29b2cdc9ff26cd725c" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.112698 4812 scope.go:117] "RemoveContainer" containerID="c3e448bd6746dd721c7c10de8c0cc104f4f250b3b1b2190af14861eb650cdb84" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.127109 4812 scope.go:117] "RemoveContainer" containerID="c5bb604f6590b8644e494f5182c4c1c2ba349637a33d8ed037a109968ac8efc3" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.127600 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5bb604f6590b8644e494f5182c4c1c2ba349637a33d8ed037a109968ac8efc3\": container with ID starting with c5bb604f6590b8644e494f5182c4c1c2ba349637a33d8ed037a109968ac8efc3 not found: ID does not exist" containerID="c5bb604f6590b8644e494f5182c4c1c2ba349637a33d8ed037a109968ac8efc3" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.127634 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5bb604f6590b8644e494f5182c4c1c2ba349637a33d8ed037a109968ac8efc3"} err="failed to get container status \"c5bb604f6590b8644e494f5182c4c1c2ba349637a33d8ed037a109968ac8efc3\": rpc error: code = NotFound desc = could not find container \"c5bb604f6590b8644e494f5182c4c1c2ba349637a33d8ed037a109968ac8efc3\": container with ID starting with c5bb604f6590b8644e494f5182c4c1c2ba349637a33d8ed037a109968ac8efc3 not found: ID does not exist" Feb 16 13:38:14 crc 
kubenswrapper[4812]: I0216 13:38:14.127659 4812 scope.go:117] "RemoveContainer" containerID="c5cde2928e76ed34ec0873c532eb57bc500471adc517cf29b2cdc9ff26cd725c" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.128051 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5cde2928e76ed34ec0873c532eb57bc500471adc517cf29b2cdc9ff26cd725c\": container with ID starting with c5cde2928e76ed34ec0873c532eb57bc500471adc517cf29b2cdc9ff26cd725c not found: ID does not exist" containerID="c5cde2928e76ed34ec0873c532eb57bc500471adc517cf29b2cdc9ff26cd725c" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.128095 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5cde2928e76ed34ec0873c532eb57bc500471adc517cf29b2cdc9ff26cd725c"} err="failed to get container status \"c5cde2928e76ed34ec0873c532eb57bc500471adc517cf29b2cdc9ff26cd725c\": rpc error: code = NotFound desc = could not find container \"c5cde2928e76ed34ec0873c532eb57bc500471adc517cf29b2cdc9ff26cd725c\": container with ID starting with c5cde2928e76ed34ec0873c532eb57bc500471adc517cf29b2cdc9ff26cd725c not found: ID does not exist" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.128122 4812 scope.go:117] "RemoveContainer" containerID="c3e448bd6746dd721c7c10de8c0cc104f4f250b3b1b2190af14861eb650cdb84" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.128429 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3e448bd6746dd721c7c10de8c0cc104f4f250b3b1b2190af14861eb650cdb84\": container with ID starting with c3e448bd6746dd721c7c10de8c0cc104f4f250b3b1b2190af14861eb650cdb84 not found: ID does not exist" containerID="c3e448bd6746dd721c7c10de8c0cc104f4f250b3b1b2190af14861eb650cdb84" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.128475 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c3e448bd6746dd721c7c10de8c0cc104f4f250b3b1b2190af14861eb650cdb84"} err="failed to get container status \"c3e448bd6746dd721c7c10de8c0cc104f4f250b3b1b2190af14861eb650cdb84\": rpc error: code = NotFound desc = could not find container \"c3e448bd6746dd721c7c10de8c0cc104f4f250b3b1b2190af14861eb650cdb84\": container with ID starting with c3e448bd6746dd721c7c10de8c0cc104f4f250b3b1b2190af14861eb650cdb84 not found: ID does not exist" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.128494 4812 scope.go:117] "RemoveContainer" containerID="4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.140222 4812 scope.go:117] "RemoveContainer" containerID="3a06eaf7fdb4d05744ed3eca6f60920e828573eafbbf1a081f69d46f05441696" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.151359 4812 scope.go:117] "RemoveContainer" containerID="e50852e67a262f14f518869331f31c8296e0f369ece81f2b52a246918b26dc76" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.163206 4812 scope.go:117] "RemoveContainer" containerID="4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.163819 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626\": container with ID starting with 4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626 not found: ID does not exist" containerID="4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.163868 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626"} err="failed to get container status \"4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626\": rpc error: code = 
NotFound desc = could not find container \"4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626\": container with ID starting with 4f5aa6f991a550486ac5ebd3aff4ba4ecb90d50f3311f426b006a90935908626 not found: ID does not exist" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.163900 4812 scope.go:117] "RemoveContainer" containerID="3a06eaf7fdb4d05744ed3eca6f60920e828573eafbbf1a081f69d46f05441696" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.164306 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a06eaf7fdb4d05744ed3eca6f60920e828573eafbbf1a081f69d46f05441696\": container with ID starting with 3a06eaf7fdb4d05744ed3eca6f60920e828573eafbbf1a081f69d46f05441696 not found: ID does not exist" containerID="3a06eaf7fdb4d05744ed3eca6f60920e828573eafbbf1a081f69d46f05441696" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.164371 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a06eaf7fdb4d05744ed3eca6f60920e828573eafbbf1a081f69d46f05441696"} err="failed to get container status \"3a06eaf7fdb4d05744ed3eca6f60920e828573eafbbf1a081f69d46f05441696\": rpc error: code = NotFound desc = could not find container \"3a06eaf7fdb4d05744ed3eca6f60920e828573eafbbf1a081f69d46f05441696\": container with ID starting with 3a06eaf7fdb4d05744ed3eca6f60920e828573eafbbf1a081f69d46f05441696 not found: ID does not exist" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.164416 4812 scope.go:117] "RemoveContainer" containerID="e50852e67a262f14f518869331f31c8296e0f369ece81f2b52a246918b26dc76" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.164739 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e50852e67a262f14f518869331f31c8296e0f369ece81f2b52a246918b26dc76\": container with ID starting with 
e50852e67a262f14f518869331f31c8296e0f369ece81f2b52a246918b26dc76 not found: ID does not exist" containerID="e50852e67a262f14f518869331f31c8296e0f369ece81f2b52a246918b26dc76" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.164773 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e50852e67a262f14f518869331f31c8296e0f369ece81f2b52a246918b26dc76"} err="failed to get container status \"e50852e67a262f14f518869331f31c8296e0f369ece81f2b52a246918b26dc76\": rpc error: code = NotFound desc = could not find container \"e50852e67a262f14f518869331f31c8296e0f369ece81f2b52a246918b26dc76\": container with ID starting with e50852e67a262f14f518869331f31c8296e0f369ece81f2b52a246918b26dc76 not found: ID does not exist" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.549424 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.549546 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.977858 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wspdc"] Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.978122 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a9695b-636b-4b29-a6dd-4e0708706b74" containerName="extract-content" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978137 4812 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c1a9695b-636b-4b29-a6dd-4e0708706b74" containerName="extract-content" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.978145 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a297c2d9-88a8-4019-94f5-c1f5498bee86" containerName="registry-server" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978152 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a297c2d9-88a8-4019-94f5-c1f5498bee86" containerName="registry-server" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.978165 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567e2fcc-e342-41e9-a406-4758f7c5551e" containerName="extract-content" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978173 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="567e2fcc-e342-41e9-a406-4758f7c5551e" containerName="extract-content" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.978183 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a297c2d9-88a8-4019-94f5-c1f5498bee86" containerName="extract-utilities" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978191 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a297c2d9-88a8-4019-94f5-c1f5498bee86" containerName="extract-utilities" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.978207 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a297c2d9-88a8-4019-94f5-c1f5498bee86" containerName="extract-content" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978215 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a297c2d9-88a8-4019-94f5-c1f5498bee86" containerName="extract-content" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.978225 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2984d252-d29e-49b5-87ed-9ce7d19edc6d" containerName="registry-server" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978232 4812 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="2984d252-d29e-49b5-87ed-9ce7d19edc6d" containerName="registry-server" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.978240 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a9695b-636b-4b29-a6dd-4e0708706b74" containerName="registry-server" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978248 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a9695b-636b-4b29-a6dd-4e0708706b74" containerName="registry-server" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.978256 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2984d252-d29e-49b5-87ed-9ce7d19edc6d" containerName="extract-utilities" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978262 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2984d252-d29e-49b5-87ed-9ce7d19edc6d" containerName="extract-utilities" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.978272 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567e2fcc-e342-41e9-a406-4758f7c5551e" containerName="registry-server" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978279 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="567e2fcc-e342-41e9-a406-4758f7c5551e" containerName="registry-server" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.978292 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2984d252-d29e-49b5-87ed-9ce7d19edc6d" containerName="extract-content" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978299 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2984d252-d29e-49b5-87ed-9ce7d19edc6d" containerName="extract-content" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.978307 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a9695b-636b-4b29-a6dd-4e0708706b74" containerName="extract-utilities" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978314 4812 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c1a9695b-636b-4b29-a6dd-4e0708706b74" containerName="extract-utilities" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.978373 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567e2fcc-e342-41e9-a406-4758f7c5551e" containerName="extract-utilities" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978383 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="567e2fcc-e342-41e9-a406-4758f7c5551e" containerName="extract-utilities" Feb 16 13:38:14 crc kubenswrapper[4812]: E0216 13:38:14.978392 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7147a2f9-6f8c-4fa5-b6da-a6a67a53e231" containerName="marketplace-operator" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978398 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7147a2f9-6f8c-4fa5-b6da-a6a67a53e231" containerName="marketplace-operator" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978534 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2984d252-d29e-49b5-87ed-9ce7d19edc6d" containerName="registry-server" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978551 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a9695b-636b-4b29-a6dd-4e0708706b74" containerName="registry-server" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978565 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a297c2d9-88a8-4019-94f5-c1f5498bee86" containerName="registry-server" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978573 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="567e2fcc-e342-41e9-a406-4758f7c5551e" containerName="registry-server" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.978580 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7147a2f9-6f8c-4fa5-b6da-a6a67a53e231" containerName="marketplace-operator" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.979394 4812 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wspdc" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.981423 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 13:38:14 crc kubenswrapper[4812]: I0216 13:38:14.988456 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wspdc"] Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.082080 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97f0d30e-e1e9-4b04-a667-9774b17b6e1d-utilities\") pod \"redhat-marketplace-wspdc\" (UID: \"97f0d30e-e1e9-4b04-a667-9774b17b6e1d\") " pod="openshift-marketplace/redhat-marketplace-wspdc" Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.082141 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97f0d30e-e1e9-4b04-a667-9774b17b6e1d-catalog-content\") pod \"redhat-marketplace-wspdc\" (UID: \"97f0d30e-e1e9-4b04-a667-9774b17b6e1d\") " pod="openshift-marketplace/redhat-marketplace-wspdc" Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.082242 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjzt4\" (UniqueName: \"kubernetes.io/projected/97f0d30e-e1e9-4b04-a667-9774b17b6e1d-kube-api-access-sjzt4\") pod \"redhat-marketplace-wspdc\" (UID: \"97f0d30e-e1e9-4b04-a667-9774b17b6e1d\") " pod="openshift-marketplace/redhat-marketplace-wspdc" Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.183476 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjzt4\" (UniqueName: \"kubernetes.io/projected/97f0d30e-e1e9-4b04-a667-9774b17b6e1d-kube-api-access-sjzt4\") pod 
\"redhat-marketplace-wspdc\" (UID: \"97f0d30e-e1e9-4b04-a667-9774b17b6e1d\") " pod="openshift-marketplace/redhat-marketplace-wspdc" Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.183544 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97f0d30e-e1e9-4b04-a667-9774b17b6e1d-utilities\") pod \"redhat-marketplace-wspdc\" (UID: \"97f0d30e-e1e9-4b04-a667-9774b17b6e1d\") " pod="openshift-marketplace/redhat-marketplace-wspdc" Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.183577 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97f0d30e-e1e9-4b04-a667-9774b17b6e1d-catalog-content\") pod \"redhat-marketplace-wspdc\" (UID: \"97f0d30e-e1e9-4b04-a667-9774b17b6e1d\") " pod="openshift-marketplace/redhat-marketplace-wspdc" Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.184345 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97f0d30e-e1e9-4b04-a667-9774b17b6e1d-catalog-content\") pod \"redhat-marketplace-wspdc\" (UID: \"97f0d30e-e1e9-4b04-a667-9774b17b6e1d\") " pod="openshift-marketplace/redhat-marketplace-wspdc" Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.184545 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97f0d30e-e1e9-4b04-a667-9774b17b6e1d-utilities\") pod \"redhat-marketplace-wspdc\" (UID: \"97f0d30e-e1e9-4b04-a667-9774b17b6e1d\") " pod="openshift-marketplace/redhat-marketplace-wspdc" Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.199972 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjzt4\" (UniqueName: \"kubernetes.io/projected/97f0d30e-e1e9-4b04-a667-9774b17b6e1d-kube-api-access-sjzt4\") pod \"redhat-marketplace-wspdc\" (UID: 
\"97f0d30e-e1e9-4b04-a667-9774b17b6e1d\") " pod="openshift-marketplace/redhat-marketplace-wspdc" Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.304573 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wspdc" Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.484488 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wspdc"] Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.896833 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2984d252-d29e-49b5-87ed-9ce7d19edc6d" path="/var/lib/kubelet/pods/2984d252-d29e-49b5-87ed-9ce7d19edc6d/volumes" Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.898259 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7147a2f9-6f8c-4fa5-b6da-a6a67a53e231" path="/var/lib/kubelet/pods/7147a2f9-6f8c-4fa5-b6da-a6a67a53e231/volumes" Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.899113 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a297c2d9-88a8-4019-94f5-c1f5498bee86" path="/var/lib/kubelet/pods/a297c2d9-88a8-4019-94f5-c1f5498bee86/volumes" Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.901715 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a9695b-636b-4b29-a6dd-4e0708706b74" path="/var/lib/kubelet/pods/c1a9695b-636b-4b29-a6dd-4e0708706b74/volumes" Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.940482 4812 generic.go:334] "Generic (PLEG): container finished" podID="97f0d30e-e1e9-4b04-a667-9774b17b6e1d" containerID="80b810745b037de16529fc4352c2984b71ddbb4b5a924681fc2d66d3bbeb07ab" exitCode=0 Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.940549 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wspdc" 
event={"ID":"97f0d30e-e1e9-4b04-a667-9774b17b6e1d","Type":"ContainerDied","Data":"80b810745b037de16529fc4352c2984b71ddbb4b5a924681fc2d66d3bbeb07ab"} Feb 16 13:38:15 crc kubenswrapper[4812]: I0216 13:38:15.940609 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wspdc" event={"ID":"97f0d30e-e1e9-4b04-a667-9774b17b6e1d","Type":"ContainerStarted","Data":"be6a6eb83914d67707b183ac4529b0f4f206749a36d21d2fb091e679157abaf0"} Feb 16 13:38:16 crc kubenswrapper[4812]: I0216 13:38:16.946704 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wspdc" event={"ID":"97f0d30e-e1e9-4b04-a667-9774b17b6e1d","Type":"ContainerStarted","Data":"ec7eb9935da22e67eb81c42954eeabbbb820821a26bf3d7d4210f1e7e7324dc3"} Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.178036 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k2kpr"] Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.179245 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2kpr" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.182215 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.194935 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2kpr"] Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.214024 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plbt4\" (UniqueName: \"kubernetes.io/projected/8cb0d463-0679-4810-a6fa-7e56d77677db-kube-api-access-plbt4\") pod \"redhat-operators-k2kpr\" (UID: \"8cb0d463-0679-4810-a6fa-7e56d77677db\") " pod="openshift-marketplace/redhat-operators-k2kpr" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.214096 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb0d463-0679-4810-a6fa-7e56d77677db-catalog-content\") pod \"redhat-operators-k2kpr\" (UID: \"8cb0d463-0679-4810-a6fa-7e56d77677db\") " pod="openshift-marketplace/redhat-operators-k2kpr" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.214133 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb0d463-0679-4810-a6fa-7e56d77677db-utilities\") pod \"redhat-operators-k2kpr\" (UID: \"8cb0d463-0679-4810-a6fa-7e56d77677db\") " pod="openshift-marketplace/redhat-operators-k2kpr" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.314886 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plbt4\" (UniqueName: \"kubernetes.io/projected/8cb0d463-0679-4810-a6fa-7e56d77677db-kube-api-access-plbt4\") pod \"redhat-operators-k2kpr\" (UID: 
\"8cb0d463-0679-4810-a6fa-7e56d77677db\") " pod="openshift-marketplace/redhat-operators-k2kpr" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.314961 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb0d463-0679-4810-a6fa-7e56d77677db-catalog-content\") pod \"redhat-operators-k2kpr\" (UID: \"8cb0d463-0679-4810-a6fa-7e56d77677db\") " pod="openshift-marketplace/redhat-operators-k2kpr" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.315011 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb0d463-0679-4810-a6fa-7e56d77677db-utilities\") pod \"redhat-operators-k2kpr\" (UID: \"8cb0d463-0679-4810-a6fa-7e56d77677db\") " pod="openshift-marketplace/redhat-operators-k2kpr" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.315506 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb0d463-0679-4810-a6fa-7e56d77677db-utilities\") pod \"redhat-operators-k2kpr\" (UID: \"8cb0d463-0679-4810-a6fa-7e56d77677db\") " pod="openshift-marketplace/redhat-operators-k2kpr" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.315589 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb0d463-0679-4810-a6fa-7e56d77677db-catalog-content\") pod \"redhat-operators-k2kpr\" (UID: \"8cb0d463-0679-4810-a6fa-7e56d77677db\") " pod="openshift-marketplace/redhat-operators-k2kpr" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.334724 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plbt4\" (UniqueName: \"kubernetes.io/projected/8cb0d463-0679-4810-a6fa-7e56d77677db-kube-api-access-plbt4\") pod \"redhat-operators-k2kpr\" (UID: \"8cb0d463-0679-4810-a6fa-7e56d77677db\") " 
pod="openshift-marketplace/redhat-operators-k2kpr" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.377188 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4f7gt"] Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.378082 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4f7gt" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.381017 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.392530 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4f7gt"] Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.416150 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcfe737-220e-464b-b4dd-7956ceec99b6-catalog-content\") pod \"community-operators-4f7gt\" (UID: \"4dcfe737-220e-464b-b4dd-7956ceec99b6\") " pod="openshift-marketplace/community-operators-4f7gt" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.416218 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcfe737-220e-464b-b4dd-7956ceec99b6-utilities\") pod \"community-operators-4f7gt\" (UID: \"4dcfe737-220e-464b-b4dd-7956ceec99b6\") " pod="openshift-marketplace/community-operators-4f7gt" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.416251 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv77x\" (UniqueName: \"kubernetes.io/projected/4dcfe737-220e-464b-b4dd-7956ceec99b6-kube-api-access-mv77x\") pod \"community-operators-4f7gt\" (UID: \"4dcfe737-220e-464b-b4dd-7956ceec99b6\") " 
pod="openshift-marketplace/community-operators-4f7gt" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.494826 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2kpr" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.516957 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv77x\" (UniqueName: \"kubernetes.io/projected/4dcfe737-220e-464b-b4dd-7956ceec99b6-kube-api-access-mv77x\") pod \"community-operators-4f7gt\" (UID: \"4dcfe737-220e-464b-b4dd-7956ceec99b6\") " pod="openshift-marketplace/community-operators-4f7gt" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.517067 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcfe737-220e-464b-b4dd-7956ceec99b6-catalog-content\") pod \"community-operators-4f7gt\" (UID: \"4dcfe737-220e-464b-b4dd-7956ceec99b6\") " pod="openshift-marketplace/community-operators-4f7gt" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.517131 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcfe737-220e-464b-b4dd-7956ceec99b6-utilities\") pod \"community-operators-4f7gt\" (UID: \"4dcfe737-220e-464b-b4dd-7956ceec99b6\") " pod="openshift-marketplace/community-operators-4f7gt" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.517720 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcfe737-220e-464b-b4dd-7956ceec99b6-utilities\") pod \"community-operators-4f7gt\" (UID: \"4dcfe737-220e-464b-b4dd-7956ceec99b6\") " pod="openshift-marketplace/community-operators-4f7gt" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.517807 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4dcfe737-220e-464b-b4dd-7956ceec99b6-catalog-content\") pod \"community-operators-4f7gt\" (UID: \"4dcfe737-220e-464b-b4dd-7956ceec99b6\") " pod="openshift-marketplace/community-operators-4f7gt" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.539793 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv77x\" (UniqueName: \"kubernetes.io/projected/4dcfe737-220e-464b-b4dd-7956ceec99b6-kube-api-access-mv77x\") pod \"community-operators-4f7gt\" (UID: \"4dcfe737-220e-464b-b4dd-7956ceec99b6\") " pod="openshift-marketplace/community-operators-4f7gt" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.673146 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2kpr"] Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.718295 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4f7gt" Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.873509 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4f7gt"] Feb 16 13:38:17 crc kubenswrapper[4812]: W0216 13:38:17.886650 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dcfe737_220e_464b_b4dd_7956ceec99b6.slice/crio-b89767f8e96ec84e114463b685535cb910c9006e9d69847f06c4df1ef68d2e42 WatchSource:0}: Error finding container b89767f8e96ec84e114463b685535cb910c9006e9d69847f06c4df1ef68d2e42: Status 404 returned error can't find the container with id b89767f8e96ec84e114463b685535cb910c9006e9d69847f06c4df1ef68d2e42 Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.953763 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f7gt" 
event={"ID":"4dcfe737-220e-464b-b4dd-7956ceec99b6","Type":"ContainerStarted","Data":"b89767f8e96ec84e114463b685535cb910c9006e9d69847f06c4df1ef68d2e42"} Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.957379 4812 generic.go:334] "Generic (PLEG): container finished" podID="97f0d30e-e1e9-4b04-a667-9774b17b6e1d" containerID="ec7eb9935da22e67eb81c42954eeabbbb820821a26bf3d7d4210f1e7e7324dc3" exitCode=0 Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.957505 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wspdc" event={"ID":"97f0d30e-e1e9-4b04-a667-9774b17b6e1d","Type":"ContainerDied","Data":"ec7eb9935da22e67eb81c42954eeabbbb820821a26bf3d7d4210f1e7e7324dc3"} Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.966407 4812 generic.go:334] "Generic (PLEG): container finished" podID="8cb0d463-0679-4810-a6fa-7e56d77677db" containerID="e6c8e1d33f04deaa7f27500f4a6d9d2569d177659887f34833928af9a4a8969e" exitCode=0 Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.966655 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2kpr" event={"ID":"8cb0d463-0679-4810-a6fa-7e56d77677db","Type":"ContainerDied","Data":"e6c8e1d33f04deaa7f27500f4a6d9d2569d177659887f34833928af9a4a8969e"} Feb 16 13:38:17 crc kubenswrapper[4812]: I0216 13:38:17.966682 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2kpr" event={"ID":"8cb0d463-0679-4810-a6fa-7e56d77677db","Type":"ContainerStarted","Data":"0372b2d7c9d12deff7893c6fec4f813d285ddedf053e3e1a04d24888c2bd4a2d"} Feb 16 13:38:18 crc kubenswrapper[4812]: I0216 13:38:18.974585 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wspdc" event={"ID":"97f0d30e-e1e9-4b04-a667-9774b17b6e1d","Type":"ContainerStarted","Data":"40270c584342941200085083310adc96eed6a7b22e352e9200c87410c7411e5b"} Feb 16 13:38:18 crc kubenswrapper[4812]: I0216 
13:38:18.976571 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2kpr" event={"ID":"8cb0d463-0679-4810-a6fa-7e56d77677db","Type":"ContainerStarted","Data":"f20c9b0a9dc0781e3c442be2c6443488af241a00ab582a8d8ce6a794dda7f9cf"} Feb 16 13:38:18 crc kubenswrapper[4812]: I0216 13:38:18.978625 4812 generic.go:334] "Generic (PLEG): container finished" podID="4dcfe737-220e-464b-b4dd-7956ceec99b6" containerID="73466a51d3731b4e012ecf86f63b9983512488cd1ab6a423cd24259d0402e80f" exitCode=0 Feb 16 13:38:18 crc kubenswrapper[4812]: I0216 13:38:18.978666 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f7gt" event={"ID":"4dcfe737-220e-464b-b4dd-7956ceec99b6","Type":"ContainerDied","Data":"73466a51d3731b4e012ecf86f63b9983512488cd1ab6a423cd24259d0402e80f"} Feb 16 13:38:18 crc kubenswrapper[4812]: I0216 13:38:18.996717 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wspdc" podStartSLOduration=2.596478989 podStartE2EDuration="4.99668726s" podCreationTimestamp="2026-02-16 13:38:14 +0000 UTC" firstStartedPulling="2026-02-16 13:38:15.942176986 +0000 UTC m=+385.006507687" lastFinishedPulling="2026-02-16 13:38:18.342385247 +0000 UTC m=+387.406715958" observedRunningTime="2026-02-16 13:38:18.990771072 +0000 UTC m=+388.055101773" watchObservedRunningTime="2026-02-16 13:38:18.99668726 +0000 UTC m=+388.061017961" Feb 16 13:38:19 crc kubenswrapper[4812]: I0216 13:38:19.985634 4812 generic.go:334] "Generic (PLEG): container finished" podID="8cb0d463-0679-4810-a6fa-7e56d77677db" containerID="f20c9b0a9dc0781e3c442be2c6443488af241a00ab582a8d8ce6a794dda7f9cf" exitCode=0 Feb 16 13:38:19 crc kubenswrapper[4812]: I0216 13:38:19.985733 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2kpr" 
event={"ID":"8cb0d463-0679-4810-a6fa-7e56d77677db","Type":"ContainerDied","Data":"f20c9b0a9dc0781e3c442be2c6443488af241a00ab582a8d8ce6a794dda7f9cf"} Feb 16 13:38:20 crc kubenswrapper[4812]: I0216 13:38:20.993506 4812 generic.go:334] "Generic (PLEG): container finished" podID="4dcfe737-220e-464b-b4dd-7956ceec99b6" containerID="868a65069e7b7f4e03bb6ff24db9d51ae19f38dc0e05eefe8f495c9378e44336" exitCode=0 Feb 16 13:38:20 crc kubenswrapper[4812]: I0216 13:38:20.993596 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f7gt" event={"ID":"4dcfe737-220e-464b-b4dd-7956ceec99b6","Type":"ContainerDied","Data":"868a65069e7b7f4e03bb6ff24db9d51ae19f38dc0e05eefe8f495c9378e44336"} Feb 16 13:38:22 crc kubenswrapper[4812]: I0216 13:38:22.001593 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2kpr" event={"ID":"8cb0d463-0679-4810-a6fa-7e56d77677db","Type":"ContainerStarted","Data":"1e8a65f5b656ce3ca6eddf6b8b678a220141735ef8ea059e45bdd88095dd45cf"} Feb 16 13:38:22 crc kubenswrapper[4812]: I0216 13:38:22.003938 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f7gt" event={"ID":"4dcfe737-220e-464b-b4dd-7956ceec99b6","Type":"ContainerStarted","Data":"d0d086dbb63b1c8b9a539be20df72cc9545f619e0ee041195cb33d308aa4c7ae"} Feb 16 13:38:22 crc kubenswrapper[4812]: I0216 13:38:22.024179 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k2kpr" podStartSLOduration=1.984851444 podStartE2EDuration="5.024162353s" podCreationTimestamp="2026-02-16 13:38:17 +0000 UTC" firstStartedPulling="2026-02-16 13:38:17.968003719 +0000 UTC m=+387.032334420" lastFinishedPulling="2026-02-16 13:38:21.007314628 +0000 UTC m=+390.071645329" observedRunningTime="2026-02-16 13:38:22.019641458 +0000 UTC m=+391.083972159" watchObservedRunningTime="2026-02-16 13:38:22.024162353 +0000 UTC m=+391.088493054" 
Feb 16 13:38:22 crc kubenswrapper[4812]: I0216 13:38:22.040130 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4f7gt" podStartSLOduration=2.561784487 podStartE2EDuration="5.040109842s" podCreationTimestamp="2026-02-16 13:38:17 +0000 UTC" firstStartedPulling="2026-02-16 13:38:18.980001009 +0000 UTC m=+388.044331710" lastFinishedPulling="2026-02-16 13:38:21.458326364 +0000 UTC m=+390.522657065" observedRunningTime="2026-02-16 13:38:22.039485153 +0000 UTC m=+391.103815864" watchObservedRunningTime="2026-02-16 13:38:22.040109842 +0000 UTC m=+391.104440543" Feb 16 13:38:25 crc kubenswrapper[4812]: I0216 13:38:25.305075 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wspdc" Feb 16 13:38:25 crc kubenswrapper[4812]: I0216 13:38:25.305147 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wspdc" Feb 16 13:38:25 crc kubenswrapper[4812]: I0216 13:38:25.353697 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wspdc" Feb 16 13:38:26 crc kubenswrapper[4812]: I0216 13:38:26.065994 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wspdc" Feb 16 13:38:27 crc kubenswrapper[4812]: I0216 13:38:27.495206 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k2kpr" Feb 16 13:38:27 crc kubenswrapper[4812]: I0216 13:38:27.496598 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k2kpr" Feb 16 13:38:27 crc kubenswrapper[4812]: I0216 13:38:27.540044 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k2kpr" Feb 16 13:38:27 crc kubenswrapper[4812]: I0216 
13:38:27.719043 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4f7gt" Feb 16 13:38:27 crc kubenswrapper[4812]: I0216 13:38:27.719104 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4f7gt" Feb 16 13:38:27 crc kubenswrapper[4812]: I0216 13:38:27.760040 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4f7gt" Feb 16 13:38:28 crc kubenswrapper[4812]: I0216 13:38:28.073810 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k2kpr" Feb 16 13:38:28 crc kubenswrapper[4812]: I0216 13:38:28.074322 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4f7gt" Feb 16 13:38:43 crc kubenswrapper[4812]: I0216 13:38:43.982522 4812 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod567e2fcc-e342-41e9-a406-4758f7c5551e"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod567e2fcc-e342-41e9-a406-4758f7c5551e] : Timed out while waiting for systemd to remove kubepods-burstable-pod567e2fcc_e342_41e9_a406_4758f7c5551e.slice" Feb 16 13:38:43 crc kubenswrapper[4812]: E0216 13:38:43.983156 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod567e2fcc-e342-41e9-a406-4758f7c5551e] : unable to destroy cgroup paths for cgroup [kubepods burstable pod567e2fcc-e342-41e9-a406-4758f7c5551e] : Timed out while waiting for systemd to remove kubepods-burstable-pod567e2fcc_e342_41e9_a406_4758f7c5551e.slice" pod="openshift-marketplace/certified-operators-gfhfv" podUID="567e2fcc-e342-41e9-a406-4758f7c5551e" Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.121095 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gfhfv" Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.139473 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gfhfv"] Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.142231 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gfhfv"] Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.177047 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-88kx2"] Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.177998 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-88kx2" Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.180426 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.190270 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-88kx2"] Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.351280 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00153ebb-09b0-4de5-82ce-8e71fc35acac-utilities\") pod \"certified-operators-88kx2\" (UID: \"00153ebb-09b0-4de5-82ce-8e71fc35acac\") " pod="openshift-marketplace/certified-operators-88kx2" Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.351470 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngff6\" (UniqueName: \"kubernetes.io/projected/00153ebb-09b0-4de5-82ce-8e71fc35acac-kube-api-access-ngff6\") pod \"certified-operators-88kx2\" (UID: \"00153ebb-09b0-4de5-82ce-8e71fc35acac\") " pod="openshift-marketplace/certified-operators-88kx2" Feb 16 
13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.351539 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00153ebb-09b0-4de5-82ce-8e71fc35acac-catalog-content\") pod \"certified-operators-88kx2\" (UID: \"00153ebb-09b0-4de5-82ce-8e71fc35acac\") " pod="openshift-marketplace/certified-operators-88kx2" Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.452970 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngff6\" (UniqueName: \"kubernetes.io/projected/00153ebb-09b0-4de5-82ce-8e71fc35acac-kube-api-access-ngff6\") pod \"certified-operators-88kx2\" (UID: \"00153ebb-09b0-4de5-82ce-8e71fc35acac\") " pod="openshift-marketplace/certified-operators-88kx2" Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.453042 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00153ebb-09b0-4de5-82ce-8e71fc35acac-catalog-content\") pod \"certified-operators-88kx2\" (UID: \"00153ebb-09b0-4de5-82ce-8e71fc35acac\") " pod="openshift-marketplace/certified-operators-88kx2" Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.453142 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00153ebb-09b0-4de5-82ce-8e71fc35acac-utilities\") pod \"certified-operators-88kx2\" (UID: \"00153ebb-09b0-4de5-82ce-8e71fc35acac\") " pod="openshift-marketplace/certified-operators-88kx2" Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.454017 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00153ebb-09b0-4de5-82ce-8e71fc35acac-catalog-content\") pod \"certified-operators-88kx2\" (UID: \"00153ebb-09b0-4de5-82ce-8e71fc35acac\") " pod="openshift-marketplace/certified-operators-88kx2" Feb 16 
13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.454077 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00153ebb-09b0-4de5-82ce-8e71fc35acac-utilities\") pod \"certified-operators-88kx2\" (UID: \"00153ebb-09b0-4de5-82ce-8e71fc35acac\") " pod="openshift-marketplace/certified-operators-88kx2" Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.481231 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngff6\" (UniqueName: \"kubernetes.io/projected/00153ebb-09b0-4de5-82ce-8e71fc35acac-kube-api-access-ngff6\") pod \"certified-operators-88kx2\" (UID: \"00153ebb-09b0-4de5-82ce-8e71fc35acac\") " pod="openshift-marketplace/certified-operators-88kx2" Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.505219 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-88kx2" Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.548890 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.548957 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.549004 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.549837 4812 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0dea0551bdc1dbe8171150e4ea91a5f7a4c6365d605948c214a9ad8e715fdd89"} pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.549919 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" containerID="cri-o://0dea0551bdc1dbe8171150e4ea91a5f7a4c6365d605948c214a9ad8e715fdd89" gracePeriod=600 Feb 16 13:38:44 crc kubenswrapper[4812]: I0216 13:38:44.727361 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-88kx2"] Feb 16 13:38:45 crc kubenswrapper[4812]: I0216 13:38:45.129115 4812 generic.go:334] "Generic (PLEG): container finished" podID="00153ebb-09b0-4de5-82ce-8e71fc35acac" containerID="6a27643e0337243485461b64f7fb3b6a6d7f984ce3ebdb3e2c510217f0b7b359" exitCode=0 Feb 16 13:38:45 crc kubenswrapper[4812]: I0216 13:38:45.130819 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88kx2" event={"ID":"00153ebb-09b0-4de5-82ce-8e71fc35acac","Type":"ContainerDied","Data":"6a27643e0337243485461b64f7fb3b6a6d7f984ce3ebdb3e2c510217f0b7b359"} Feb 16 13:38:45 crc kubenswrapper[4812]: I0216 13:38:45.131860 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88kx2" event={"ID":"00153ebb-09b0-4de5-82ce-8e71fc35acac","Type":"ContainerStarted","Data":"1387e1eb3aa6c66bef23baaa6a2b386d48c7c06a1beadc9cd23a95625c222347"} Feb 16 13:38:45 crc kubenswrapper[4812]: I0216 13:38:45.135257 4812 generic.go:334] "Generic (PLEG): container finished" podID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" 
containerID="0dea0551bdc1dbe8171150e4ea91a5f7a4c6365d605948c214a9ad8e715fdd89" exitCode=0 Feb 16 13:38:45 crc kubenswrapper[4812]: I0216 13:38:45.135305 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerDied","Data":"0dea0551bdc1dbe8171150e4ea91a5f7a4c6365d605948c214a9ad8e715fdd89"} Feb 16 13:38:45 crc kubenswrapper[4812]: I0216 13:38:45.135339 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerStarted","Data":"a72a25e58c4955d9061eb3209cc9f8e59817b6546c4b8aafdb6b583903cc792d"} Feb 16 13:38:45 crc kubenswrapper[4812]: I0216 13:38:45.135356 4812 scope.go:117] "RemoveContainer" containerID="0fe6225f5f96d52b832cfbd8ecb30748dfcd053b9d95a512bfbe0c6c4aa7c0e6" Feb 16 13:38:45 crc kubenswrapper[4812]: I0216 13:38:45.886311 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567e2fcc-e342-41e9-a406-4758f7c5551e" path="/var/lib/kubelet/pods/567e2fcc-e342-41e9-a406-4758f7c5551e/volumes" Feb 16 13:38:47 crc kubenswrapper[4812]: I0216 13:38:47.151519 4812 generic.go:334] "Generic (PLEG): container finished" podID="00153ebb-09b0-4de5-82ce-8e71fc35acac" containerID="d79434a673637a5701d0a683b3ad9f67c5bede26c1c4cd63073a7518ebf303ca" exitCode=0 Feb 16 13:38:47 crc kubenswrapper[4812]: I0216 13:38:47.151591 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88kx2" event={"ID":"00153ebb-09b0-4de5-82ce-8e71fc35acac","Type":"ContainerDied","Data":"d79434a673637a5701d0a683b3ad9f67c5bede26c1c4cd63073a7518ebf303ca"} Feb 16 13:38:48 crc kubenswrapper[4812]: I0216 13:38:48.160251 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88kx2" 
event={"ID":"00153ebb-09b0-4de5-82ce-8e71fc35acac","Type":"ContainerStarted","Data":"7fb660dc8bb53f1734885a3b3f6b2bfb51971904e5b84fa6c4431f09a8794179"} Feb 16 13:38:48 crc kubenswrapper[4812]: I0216 13:38:48.180077 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-88kx2" podStartSLOduration=1.74642736 podStartE2EDuration="4.180057884s" podCreationTimestamp="2026-02-16 13:38:44 +0000 UTC" firstStartedPulling="2026-02-16 13:38:45.13388006 +0000 UTC m=+414.198210761" lastFinishedPulling="2026-02-16 13:38:47.567510574 +0000 UTC m=+416.631841285" observedRunningTime="2026-02-16 13:38:48.176550889 +0000 UTC m=+417.240881600" watchObservedRunningTime="2026-02-16 13:38:48.180057884 +0000 UTC m=+417.244388585" Feb 16 13:38:54 crc kubenswrapper[4812]: I0216 13:38:54.505607 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-88kx2" Feb 16 13:38:54 crc kubenswrapper[4812]: I0216 13:38:54.506178 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-88kx2" Feb 16 13:38:54 crc kubenswrapper[4812]: I0216 13:38:54.554901 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-88kx2" Feb 16 13:38:55 crc kubenswrapper[4812]: I0216 13:38:55.240416 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-88kx2" Feb 16 13:40:44 crc kubenswrapper[4812]: I0216 13:40:44.549239 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:40:44 crc kubenswrapper[4812]: I0216 13:40:44.549962 4812 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:41:14 crc kubenswrapper[4812]: I0216 13:41:14.549327 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:41:14 crc kubenswrapper[4812]: I0216 13:41:14.551193 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:41:44 crc kubenswrapper[4812]: I0216 13:41:44.549600 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:41:44 crc kubenswrapper[4812]: I0216 13:41:44.550630 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:41:44 crc kubenswrapper[4812]: I0216 13:41:44.550709 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
Feb 16 13:41:44 crc kubenswrapper[4812]: I0216 13:41:44.551593 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a72a25e58c4955d9061eb3209cc9f8e59817b6546c4b8aafdb6b583903cc792d"} pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:41:44 crc kubenswrapper[4812]: I0216 13:41:44.551665 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" containerID="cri-o://a72a25e58c4955d9061eb3209cc9f8e59817b6546c4b8aafdb6b583903cc792d" gracePeriod=600 Feb 16 13:41:45 crc kubenswrapper[4812]: I0216 13:41:45.670272 4812 generic.go:334] "Generic (PLEG): container finished" podID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerID="a72a25e58c4955d9061eb3209cc9f8e59817b6546c4b8aafdb6b583903cc792d" exitCode=0 Feb 16 13:41:45 crc kubenswrapper[4812]: I0216 13:41:45.670351 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerDied","Data":"a72a25e58c4955d9061eb3209cc9f8e59817b6546c4b8aafdb6b583903cc792d"} Feb 16 13:41:45 crc kubenswrapper[4812]: I0216 13:41:45.670658 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerStarted","Data":"69f6102fd067a315bb5fa977a52583563bb8e2109920c634e663dafde3b8d90e"} Feb 16 13:41:45 crc kubenswrapper[4812]: I0216 13:41:45.670680 4812 scope.go:117] "RemoveContainer" containerID="0dea0551bdc1dbe8171150e4ea91a5f7a4c6365d605948c214a9ad8e715fdd89" Feb 16 13:41:52 crc kubenswrapper[4812]: I0216 13:41:52.164187 
4812 scope.go:117] "RemoveContainer" containerID="179030cd4122da3ab3b2f06540361d3e3fae9aa53195700682738df7ca9af315" Feb 16 13:43:44 crc kubenswrapper[4812]: I0216 13:43:44.549897 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:43:44 crc kubenswrapper[4812]: I0216 13:43:44.550742 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:44:12 crc kubenswrapper[4812]: I0216 13:44:12.045547 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849"] Feb 16 13:44:12 crc kubenswrapper[4812]: I0216 13:44:12.047279 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" Feb 16 13:44:12 crc kubenswrapper[4812]: I0216 13:44:12.051303 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 13:44:12 crc kubenswrapper[4812]: I0216 13:44:12.056204 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849"] Feb 16 13:44:12 crc kubenswrapper[4812]: I0216 13:44:12.199993 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57xvc\" (UniqueName: \"kubernetes.io/projected/81df20ac-ca53-4b60-8813-b91f69263210-kube-api-access-57xvc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849\" (UID: \"81df20ac-ca53-4b60-8813-b91f69263210\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" Feb 16 13:44:12 crc kubenswrapper[4812]: I0216 13:44:12.200293 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/81df20ac-ca53-4b60-8813-b91f69263210-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849\" (UID: \"81df20ac-ca53-4b60-8813-b91f69263210\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" Feb 16 13:44:12 crc kubenswrapper[4812]: I0216 13:44:12.200407 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/81df20ac-ca53-4b60-8813-b91f69263210-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849\" (UID: \"81df20ac-ca53-4b60-8813-b91f69263210\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" Feb 16 13:44:12 crc kubenswrapper[4812]: 
I0216 13:44:12.301724 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/81df20ac-ca53-4b60-8813-b91f69263210-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849\" (UID: \"81df20ac-ca53-4b60-8813-b91f69263210\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" Feb 16 13:44:12 crc kubenswrapper[4812]: I0216 13:44:12.301822 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvc\" (UniqueName: \"kubernetes.io/projected/81df20ac-ca53-4b60-8813-b91f69263210-kube-api-access-57xvc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849\" (UID: \"81df20ac-ca53-4b60-8813-b91f69263210\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" Feb 16 13:44:12 crc kubenswrapper[4812]: I0216 13:44:12.301911 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/81df20ac-ca53-4b60-8813-b91f69263210-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849\" (UID: \"81df20ac-ca53-4b60-8813-b91f69263210\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" Feb 16 13:44:12 crc kubenswrapper[4812]: I0216 13:44:12.302285 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/81df20ac-ca53-4b60-8813-b91f69263210-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849\" (UID: \"81df20ac-ca53-4b60-8813-b91f69263210\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" Feb 16 13:44:12 crc kubenswrapper[4812]: I0216 13:44:12.302341 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/81df20ac-ca53-4b60-8813-b91f69263210-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849\" (UID: \"81df20ac-ca53-4b60-8813-b91f69263210\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" Feb 16 13:44:12 crc kubenswrapper[4812]: I0216 13:44:12.320038 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57xvc\" (UniqueName: \"kubernetes.io/projected/81df20ac-ca53-4b60-8813-b91f69263210-kube-api-access-57xvc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849\" (UID: \"81df20ac-ca53-4b60-8813-b91f69263210\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" Feb 16 13:44:12 crc kubenswrapper[4812]: I0216 13:44:12.366168 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" Feb 16 13:44:12 crc kubenswrapper[4812]: I0216 13:44:12.596273 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849"] Feb 16 13:44:13 crc kubenswrapper[4812]: I0216 13:44:13.422568 4812 generic.go:334] "Generic (PLEG): container finished" podID="81df20ac-ca53-4b60-8813-b91f69263210" containerID="44f7e50f9bd2edbc23723a786c60fa1603c1cde69b6fc6cd670d5d1f85058579" exitCode=0 Feb 16 13:44:13 crc kubenswrapper[4812]: I0216 13:44:13.422613 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" event={"ID":"81df20ac-ca53-4b60-8813-b91f69263210","Type":"ContainerDied","Data":"44f7e50f9bd2edbc23723a786c60fa1603c1cde69b6fc6cd670d5d1f85058579"} Feb 16 13:44:13 crc kubenswrapper[4812]: I0216 13:44:13.422640 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" event={"ID":"81df20ac-ca53-4b60-8813-b91f69263210","Type":"ContainerStarted","Data":"f1b602631903360696546bd35a777bfa31664bb493fd22196c32d240e47a50e9"} Feb 16 13:44:13 crc kubenswrapper[4812]: I0216 13:44:13.423926 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:44:14 crc kubenswrapper[4812]: I0216 13:44:14.549642 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:44:14 crc kubenswrapper[4812]: I0216 13:44:14.550085 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:44:15 crc kubenswrapper[4812]: I0216 13:44:15.433589 4812 generic.go:334] "Generic (PLEG): container finished" podID="81df20ac-ca53-4b60-8813-b91f69263210" containerID="c5590fd625a9c2a796b98180b290f0b38e512326787d2ecfc3155d1bab9ec6c8" exitCode=0 Feb 16 13:44:15 crc kubenswrapper[4812]: I0216 13:44:15.433767 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" event={"ID":"81df20ac-ca53-4b60-8813-b91f69263210","Type":"ContainerDied","Data":"c5590fd625a9c2a796b98180b290f0b38e512326787d2ecfc3155d1bab9ec6c8"} Feb 16 13:44:16 crc kubenswrapper[4812]: I0216 13:44:16.440036 4812 generic.go:334] "Generic (PLEG): container finished" podID="81df20ac-ca53-4b60-8813-b91f69263210" 
containerID="a520e7824192fcdacd9742ce44172a4a27382115657b77f45767e7b7086ad034" exitCode=0 Feb 16 13:44:16 crc kubenswrapper[4812]: I0216 13:44:16.440082 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" event={"ID":"81df20ac-ca53-4b60-8813-b91f69263210","Type":"ContainerDied","Data":"a520e7824192fcdacd9742ce44172a4a27382115657b77f45767e7b7086ad034"} Feb 16 13:44:17 crc kubenswrapper[4812]: I0216 13:44:17.634301 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" Feb 16 13:44:17 crc kubenswrapper[4812]: I0216 13:44:17.762125 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/81df20ac-ca53-4b60-8813-b91f69263210-util\") pod \"81df20ac-ca53-4b60-8813-b91f69263210\" (UID: \"81df20ac-ca53-4b60-8813-b91f69263210\") " Feb 16 13:44:17 crc kubenswrapper[4812]: I0216 13:44:17.762528 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/81df20ac-ca53-4b60-8813-b91f69263210-bundle\") pod \"81df20ac-ca53-4b60-8813-b91f69263210\" (UID: \"81df20ac-ca53-4b60-8813-b91f69263210\") " Feb 16 13:44:17 crc kubenswrapper[4812]: I0216 13:44:17.762570 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57xvc\" (UniqueName: \"kubernetes.io/projected/81df20ac-ca53-4b60-8813-b91f69263210-kube-api-access-57xvc\") pod \"81df20ac-ca53-4b60-8813-b91f69263210\" (UID: \"81df20ac-ca53-4b60-8813-b91f69263210\") " Feb 16 13:44:17 crc kubenswrapper[4812]: I0216 13:44:17.764776 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81df20ac-ca53-4b60-8813-b91f69263210-bundle" (OuterVolumeSpecName: "bundle") pod 
"81df20ac-ca53-4b60-8813-b91f69263210" (UID: "81df20ac-ca53-4b60-8813-b91f69263210"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:44:17 crc kubenswrapper[4812]: I0216 13:44:17.773666 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81df20ac-ca53-4b60-8813-b91f69263210-kube-api-access-57xvc" (OuterVolumeSpecName: "kube-api-access-57xvc") pod "81df20ac-ca53-4b60-8813-b91f69263210" (UID: "81df20ac-ca53-4b60-8813-b91f69263210"). InnerVolumeSpecName "kube-api-access-57xvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:44:17 crc kubenswrapper[4812]: I0216 13:44:17.779536 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81df20ac-ca53-4b60-8813-b91f69263210-util" (OuterVolumeSpecName: "util") pod "81df20ac-ca53-4b60-8813-b91f69263210" (UID: "81df20ac-ca53-4b60-8813-b91f69263210"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:44:17 crc kubenswrapper[4812]: I0216 13:44:17.864308 4812 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/81df20ac-ca53-4b60-8813-b91f69263210-util\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:17 crc kubenswrapper[4812]: I0216 13:44:17.864356 4812 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/81df20ac-ca53-4b60-8813-b91f69263210-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:17 crc kubenswrapper[4812]: I0216 13:44:17.864371 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57xvc\" (UniqueName: \"kubernetes.io/projected/81df20ac-ca53-4b60-8813-b91f69263210-kube-api-access-57xvc\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:18 crc kubenswrapper[4812]: I0216 13:44:18.451811 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" event={"ID":"81df20ac-ca53-4b60-8813-b91f69263210","Type":"ContainerDied","Data":"f1b602631903360696546bd35a777bfa31664bb493fd22196c32d240e47a50e9"} Feb 16 13:44:18 crc kubenswrapper[4812]: I0216 13:44:18.451855 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1b602631903360696546bd35a777bfa31664bb493fd22196c32d240e47a50e9" Feb 16 13:44:18 crc kubenswrapper[4812]: I0216 13:44:18.451921 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.156025 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pzksg"] Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.156827 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="nbdb" containerID="cri-o://83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5" gracePeriod=30 Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.156965 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="sbdb" containerID="cri-o://f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d" gracePeriod=30 Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.157010 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822" gracePeriod=30 Feb 16 13:44:23 crc kubenswrapper[4812]: 
I0216 13:44:23.157052 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="northd" containerID="cri-o://c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056" gracePeriod=30 Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.157046 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="kube-rbac-proxy-node" containerID="cri-o://04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38" gracePeriod=30 Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.156790 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovn-controller" containerID="cri-o://73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a" gracePeriod=30 Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.157148 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovn-acl-logging" containerID="cri-o://880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b" gracePeriod=30 Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.275857 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovnkube-controller" containerID="cri-o://3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622" gracePeriod=30 Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.477363 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-2hhp5_934e533e-cc26-4770-af67-3dbcaa0dab5b/kube-multus/2.log" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.477918 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2hhp5_934e533e-cc26-4770-af67-3dbcaa0dab5b/kube-multus/1.log" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.477969 4812 generic.go:334] "Generic (PLEG): container finished" podID="934e533e-cc26-4770-af67-3dbcaa0dab5b" containerID="2d41f8ea13f87efbf94b6b39515e60a7f967c77a0430c1428f73fb0fd196cb4b" exitCode=2 Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.478021 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2hhp5" event={"ID":"934e533e-cc26-4770-af67-3dbcaa0dab5b","Type":"ContainerDied","Data":"2d41f8ea13f87efbf94b6b39515e60a7f967c77a0430c1428f73fb0fd196cb4b"} Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.478062 4812 scope.go:117] "RemoveContainer" containerID="63cd21781b359c2f631527e3e4e2649264ee4e64c5d3cc9842fb9106017565c9" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.478610 4812 scope.go:117] "RemoveContainer" containerID="2d41f8ea13f87efbf94b6b39515e60a7f967c77a0430c1428f73fb0fd196cb4b" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.485270 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovnkube-controller/3.log" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.488176 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovn-acl-logging/0.log" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.488949 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovn-controller/0.log" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.489325 4812 
generic.go:334] "Generic (PLEG): container finished" podID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerID="3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622" exitCode=0 Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.489354 4812 generic.go:334] "Generic (PLEG): container finished" podID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerID="f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d" exitCode=0 Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.489365 4812 generic.go:334] "Generic (PLEG): container finished" podID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerID="a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822" exitCode=0 Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.489373 4812 generic.go:334] "Generic (PLEG): container finished" podID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerID="04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38" exitCode=0 Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.489383 4812 generic.go:334] "Generic (PLEG): container finished" podID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerID="880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b" exitCode=143 Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.489393 4812 generic.go:334] "Generic (PLEG): container finished" podID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerID="73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a" exitCode=143 Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.489414 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerDied","Data":"3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622"} Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.489456 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" 
event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerDied","Data":"f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d"} Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.489470 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerDied","Data":"a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822"} Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.489482 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerDied","Data":"04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38"} Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.489493 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerDied","Data":"880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b"} Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.489505 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerDied","Data":"73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a"} Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.515492 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovnkube-controller/3.log" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.515526 4812 scope.go:117] "RemoveContainer" containerID="b9d4167a71f0c30ceeb980a74964baf5bdf78713153d1f3cac40fc4c79cbf69d" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.518498 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovn-acl-logging/0.log" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.519598 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovn-controller/0.log" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.520027 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.571937 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fj6r6"] Feb 16 13:44:23 crc kubenswrapper[4812]: E0216 13:44:23.572386 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovnkube-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572403 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovnkube-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: E0216 13:44:23.572410 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovn-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572415 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovn-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: E0216 13:44:23.572426 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="northd" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572432 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="northd" Feb 16 13:44:23 crc kubenswrapper[4812]: E0216 13:44:23.572483 4812 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovnkube-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572491 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovnkube-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: E0216 13:44:23.572498 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovn-acl-logging" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572504 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovn-acl-logging" Feb 16 13:44:23 crc kubenswrapper[4812]: E0216 13:44:23.572514 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="kube-rbac-proxy-node" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572520 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="kube-rbac-proxy-node" Feb 16 13:44:23 crc kubenswrapper[4812]: E0216 13:44:23.572529 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81df20ac-ca53-4b60-8813-b91f69263210" containerName="extract" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572535 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="81df20ac-ca53-4b60-8813-b91f69263210" containerName="extract" Feb 16 13:44:23 crc kubenswrapper[4812]: E0216 13:44:23.572543 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovnkube-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572548 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovnkube-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: E0216 13:44:23.572555 4812 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="sbdb" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572560 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="sbdb" Feb 16 13:44:23 crc kubenswrapper[4812]: E0216 13:44:23.572570 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="kubecfg-setup" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572576 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="kubecfg-setup" Feb 16 13:44:23 crc kubenswrapper[4812]: E0216 13:44:23.572585 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81df20ac-ca53-4b60-8813-b91f69263210" containerName="util" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572592 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="81df20ac-ca53-4b60-8813-b91f69263210" containerName="util" Feb 16 13:44:23 crc kubenswrapper[4812]: E0216 13:44:23.572598 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81df20ac-ca53-4b60-8813-b91f69263210" containerName="pull" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572603 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="81df20ac-ca53-4b60-8813-b91f69263210" containerName="pull" Feb 16 13:44:23 crc kubenswrapper[4812]: E0216 13:44:23.572610 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="nbdb" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572616 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="nbdb" Feb 16 13:44:23 crc kubenswrapper[4812]: E0216 13:44:23.572624 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="kube-rbac-proxy-ovn-metrics" Feb 
16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572630 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572713 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovnkube-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572723 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572730 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovnkube-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572738 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="northd" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572745 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovn-acl-logging" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572753 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovnkube-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572760 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovn-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572766 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="kube-rbac-proxy-node" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572773 4812 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="nbdb" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572781 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="81df20ac-ca53-4b60-8813-b91f69263210" containerName="extract" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572787 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="sbdb" Feb 16 13:44:23 crc kubenswrapper[4812]: E0216 13:44:23.572876 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovnkube-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572884 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovnkube-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: E0216 13:44:23.572893 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovnkube-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572899 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovnkube-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572976 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovnkube-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.572999 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerName="ovnkube-controller" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.574535 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.637744 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-run-ovn-kubernetes\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.637898 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.637943 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-systemd\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638049 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovnkube-config\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638078 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-var-lib-openvswitch\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 
crc kubenswrapper[4812]: I0216 13:44:23.638102 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-systemd-units\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638126 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-cni-bin\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638190 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-cni-netd\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638218 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-kubelet\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638238 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638257 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-etc-openvswitch\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638276 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638301 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-ovn\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638321 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-node-log\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638341 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovn-node-metrics-cert\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638362 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg2hw\" (UniqueName: \"kubernetes.io/projected/a67ca714-af04-4a76-8a28-54d47f66b1fa-kube-api-access-tg2hw\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: 
\"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638378 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-slash\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638411 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovnkube-script-lib\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638335 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638404 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638410 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638464 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-env-overrides\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638364 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638377 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638390 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-node-log" (OuterVolumeSpecName: "node-log") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638524 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638401 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638417 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638483 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-slash" (OuterVolumeSpecName: "host-slash") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638552 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-log-socket" (OuterVolumeSpecName: "log-socket") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638499 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-log-socket\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638616 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-run-netns\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638643 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-openvswitch\") pod \"a67ca714-af04-4a76-8a28-54d47f66b1fa\" (UID: \"a67ca714-af04-4a76-8a28-54d47f66b1fa\") " Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.638824 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7718bc10-4095-424c-8c59-702c3be17b88-ovnkube-script-lib\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 
13:44:23.638854 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7718bc10-4095-424c-8c59-702c3be17b88-ovnkube-config\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639039 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-node-log\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639062 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-log-socket\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639088 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7718bc10-4095-424c-8c59-702c3be17b88-ovn-node-metrics-cert\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639106 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-run-ovn\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 
13:44:23.639107 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639126 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-run-ovn-kubernetes\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639160 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639183 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639192 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-kubelet\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639232 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-cni-netd\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639288 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639329 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-etc-openvswitch\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639435 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjcb2\" (UniqueName: \"kubernetes.io/projected/7718bc10-4095-424c-8c59-702c3be17b88-kube-api-access-wjcb2\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639495 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-systemd-units\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639535 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7718bc10-4095-424c-8c59-702c3be17b88-env-overrides\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639591 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-var-lib-openvswitch\") pod \"ovnkube-node-fj6r6\" (UID: 
\"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639611 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-run-openvswitch\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639631 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-slash\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639669 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639692 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-run-netns\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639718 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-cni-bin\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639762 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-run-systemd\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639853 4812 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639869 4812 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639884 4812 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639926 4812 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639938 4812 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc 
kubenswrapper[4812]: I0216 13:44:23.639950 4812 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-node-log\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639960 4812 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-slash\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.639996 4812 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.640010 4812 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.640020 4812 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-log-socket\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.640031 4812 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.640041 4812 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.640079 4812 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.640092 4812 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.640103 4812 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.640114 4812 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.640124 4812 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.643354 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67ca714-af04-4a76-8a28-54d47f66b1fa-kube-api-access-tg2hw" (OuterVolumeSpecName: "kube-api-access-tg2hw") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "kube-api-access-tg2hw". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.643377 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.656531 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a67ca714-af04-4a76-8a28-54d47f66b1fa" (UID: "a67ca714-af04-4a76-8a28-54d47f66b1fa"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.741348 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjcb2\" (UniqueName: \"kubernetes.io/projected/7718bc10-4095-424c-8c59-702c3be17b88-kube-api-access-wjcb2\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.741703 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-systemd-units\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.741755 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-systemd-units\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.741830 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7718bc10-4095-424c-8c59-702c3be17b88-env-overrides\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.741894 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-var-lib-openvswitch\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.741951 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-run-openvswitch\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.741993 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-var-lib-openvswitch\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742059 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-slash\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742107 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-run-openvswitch\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742169 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742213 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-slash\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742296 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742374 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-run-netns\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742463 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-cni-bin\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742541 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-run-systemd\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742596 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-run-systemd\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742514 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-cni-bin\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742493 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-run-netns\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742494 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7718bc10-4095-424c-8c59-702c3be17b88-env-overrides\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742604 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7718bc10-4095-424c-8c59-702c3be17b88-ovnkube-config\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742773 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7718bc10-4095-424c-8c59-702c3be17b88-ovnkube-script-lib\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742829 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-node-log\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742883 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-log-socket\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.742941 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7718bc10-4095-424c-8c59-702c3be17b88-ovn-node-metrics-cert\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.743004 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-run-ovn\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.743060 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-run-ovn-kubernetes\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.743128 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-kubelet\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.743182 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-cni-netd\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.743237 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-etc-openvswitch\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.743307 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7718bc10-4095-424c-8c59-702c3be17b88-ovnkube-config\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.743387 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-kubelet\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.743422 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-run-ovn\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.743485 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-cni-netd\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.743485 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-host-run-ovn-kubernetes\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.743604 4812 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a67ca714-af04-4a76-8a28-54d47f66b1fa-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.743664 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-node-log\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.743719 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-log-socket\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.743773 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7718bc10-4095-424c-8c59-702c3be17b88-etc-openvswitch\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.743992 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7718bc10-4095-424c-8c59-702c3be17b88-ovnkube-script-lib\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.744030 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a67ca714-af04-4a76-8a28-54d47f66b1fa-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.744047 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg2hw\" (UniqueName: \"kubernetes.io/projected/a67ca714-af04-4a76-8a28-54d47f66b1fa-kube-api-access-tg2hw\") on node \"crc\" DevicePath \"\""
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.749957 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7718bc10-4095-424c-8c59-702c3be17b88-ovn-node-metrics-cert\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.760591 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjcb2\" (UniqueName: \"kubernetes.io/projected/7718bc10-4095-424c-8c59-702c3be17b88-kube-api-access-wjcb2\") pod \"ovnkube-node-fj6r6\" (UID: \"7718bc10-4095-424c-8c59-702c3be17b88\") " pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:23 crc kubenswrapper[4812]: I0216 13:44:23.886476 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.496318 4812 generic.go:334] "Generic (PLEG): container finished" podID="7718bc10-4095-424c-8c59-702c3be17b88" containerID="e08afeccd342690c9ecd4c1ed88ce2ab63cdb07dcb62a4f4df353d05c2194122" exitCode=0
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.496419 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" event={"ID":"7718bc10-4095-424c-8c59-702c3be17b88","Type":"ContainerDied","Data":"e08afeccd342690c9ecd4c1ed88ce2ab63cdb07dcb62a4f4df353d05c2194122"}
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.496736 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" event={"ID":"7718bc10-4095-424c-8c59-702c3be17b88","Type":"ContainerStarted","Data":"f675342b677695a3265f84bf026d72da766e04c8442c23c10261e05f0e88b784"}
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.498809 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2hhp5_934e533e-cc26-4770-af67-3dbcaa0dab5b/kube-multus/2.log"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.498924 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2hhp5" event={"ID":"934e533e-cc26-4770-af67-3dbcaa0dab5b","Type":"ContainerStarted","Data":"7db699ccbe47bf11420f701e011318e2d5e9bd36befa7e67482b36c74912c90d"}
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.510103 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovn-acl-logging/0.log"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.511147 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pzksg_a67ca714-af04-4a76-8a28-54d47f66b1fa/ovn-controller/0.log"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.511581 4812 generic.go:334] "Generic (PLEG): container finished" podID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerID="83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5" exitCode=0
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.511671 4812 generic.go:334] "Generic (PLEG): container finished" podID="a67ca714-af04-4a76-8a28-54d47f66b1fa" containerID="c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056" exitCode=0
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.511767 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerDied","Data":"83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5"}
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.511851 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerDied","Data":"c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056"}
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.511922 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg" event={"ID":"a67ca714-af04-4a76-8a28-54d47f66b1fa","Type":"ContainerDied","Data":"58b335f78768348993a51c94c1f0eda0952cebb019a44c6c0880f865550b4a2d"}
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.512009 4812 scope.go:117] "RemoveContainer" containerID="3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.512244 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pzksg"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.565360 4812 scope.go:117] "RemoveContainer" containerID="f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.603711 4812 scope.go:117] "RemoveContainer" containerID="83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.625122 4812 scope.go:117] "RemoveContainer" containerID="c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.647689 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pzksg"]
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.647903 4812 scope.go:117] "RemoveContainer" containerID="a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.652163 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pzksg"]
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.672478 4812 scope.go:117] "RemoveContainer" containerID="04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.700212 4812 scope.go:117] "RemoveContainer" containerID="880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.734028 4812 scope.go:117] "RemoveContainer" containerID="73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.772718 4812 scope.go:117] "RemoveContainer" containerID="7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.792461 4812 scope.go:117] "RemoveContainer" containerID="3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622"
Feb 16 13:44:24 crc kubenswrapper[4812]: E0216 13:44:24.792917 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622\": container with ID starting with 3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622 not found: ID does not exist" containerID="3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.792955 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622"} err="failed to get container status \"3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622\": rpc error: code = NotFound desc = could not find container \"3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622\": container with ID starting with 3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622 not found: ID does not exist"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.792980 4812 scope.go:117] "RemoveContainer" containerID="f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d"
Feb 16 13:44:24 crc kubenswrapper[4812]: E0216 13:44:24.793268 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\": container with ID starting with f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d not found: ID does not exist" containerID="f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.793319 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d"} err="failed to get container status \"f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\": rpc error: code = NotFound desc = could not find container \"f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\": container with ID starting with f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d not found: ID does not exist"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.793351 4812 scope.go:117] "RemoveContainer" containerID="83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5"
Feb 16 13:44:24 crc kubenswrapper[4812]: E0216 13:44:24.793752 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\": container with ID starting with 83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5 not found: ID does not exist" containerID="83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.793780 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5"} err="failed to get container status \"83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\": rpc error: code = NotFound desc = could not find container \"83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\": container with ID starting with 83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5 not found: ID does not exist"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.793798 4812 scope.go:117] "RemoveContainer" containerID="c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056"
Feb 16 13:44:24 crc kubenswrapper[4812]: E0216 13:44:24.794034 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\": container with ID starting with c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056 not found: ID does not exist" containerID="c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.794064 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056"} err="failed to get container status \"c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\": rpc error: code = NotFound desc = could not find container \"c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\": container with ID starting with c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056 not found: ID does not exist"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.794081 4812 scope.go:117] "RemoveContainer" containerID="a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822"
Feb 16 13:44:24 crc kubenswrapper[4812]: E0216 13:44:24.794256 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\": container with ID starting with a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822 not found: ID does not exist" containerID="a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.794285 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822"} err="failed to get container status \"a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\": rpc error: code = NotFound desc = could not find container \"a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\": container with ID starting with a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822 not found: ID does not exist"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.794305 4812 scope.go:117] "RemoveContainer" containerID="04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38"
Feb 16 13:44:24 crc kubenswrapper[4812]: E0216 13:44:24.794520 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\": container with ID starting with 04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38 not found: ID does not exist" containerID="04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.794551 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38"} err="failed to get container status \"04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\": rpc error: code = NotFound desc = could not find container \"04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\": container with ID starting with 04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38 not found: ID does not exist"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.794570 4812 scope.go:117] "RemoveContainer" containerID="880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b"
Feb 16 13:44:24 crc kubenswrapper[4812]: E0216 13:44:24.794760 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\": container with ID starting with 880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b not found: ID does not exist" containerID="880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.794789 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b"} err="failed to get container status \"880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\": rpc error: code = NotFound desc = could not find container \"880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\": container with ID starting with 880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b not found: ID does not exist"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.794809 4812 scope.go:117] "RemoveContainer" containerID="73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a"
Feb 16 13:44:24 crc kubenswrapper[4812]: E0216 13:44:24.794996 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\": container with ID starting with 73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a not found: ID does not exist" containerID="73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.795023 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a"} err="failed to get container status \"73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\": rpc error: code = NotFound desc = could not find container \"73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\": container with ID starting with 73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a not found: ID does not exist"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.795040 4812 scope.go:117] "RemoveContainer" containerID="7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657"
Feb 16 13:44:24 crc kubenswrapper[4812]: E0216 13:44:24.795241 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\": container with ID starting with 7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657 not found: ID does not exist" containerID="7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.795265 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657"} err="failed to get container status \"7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\": rpc error: code = NotFound desc = could not find container \"7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\": container with ID starting with 7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657 not found: ID does not exist"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.795281 4812 scope.go:117] "RemoveContainer" containerID="3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.795505 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622"} err="failed to get container status \"3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622\": rpc error: code = NotFound desc = could not find container \"3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622\": container with ID starting with 3fa6eac03530fa7a720c0109e70502bc334e31e1ff8f9ac66e3f768751a44622 not found: ID does not exist"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.795535 4812 scope.go:117] "RemoveContainer" containerID="f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.795761 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d"} err="failed to get container status \"f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\": rpc error: code = NotFound desc = could not find container \"f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d\": container with ID starting with f967773767d0f28997f5799af2e16abb3b8d2f6c466df919a8dddf5d753db14d not found: ID does not exist"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.795784 4812 scope.go:117] "RemoveContainer" containerID="83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.795968 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5"} err="failed to get container status \"83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\": rpc error: code = NotFound desc = could not find container \"83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5\": container with ID starting with 83fc0c282946d9109d39eb00ccac8103e8f7614d041d9d8a9b2dfd838bf836c5 not found: ID does not exist"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.795992 4812 scope.go:117] "RemoveContainer" containerID="c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.796161 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056"} err="failed to get container status \"c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\": rpc error: code = NotFound desc = could not find container \"c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056\": container with ID starting with c7478ed8cc5c7d617d29af34546bd4bcb33df331e3c5ea70893e24b582480056 not found: ID does not exist"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.796201 4812 scope.go:117] "RemoveContainer" containerID="a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.796388 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822"} err="failed to get container status \"a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\": rpc error: code = NotFound desc = could not find container \"a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822\": container with ID starting with a85970121b06b7a499753ddc666e42638f306208bf8865a988b34b8edfa99822 not found: ID does not exist"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.796412 4812 scope.go:117] "RemoveContainer" containerID="04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.796693 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38"} err="failed to get container status \"04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\": rpc error: code = NotFound desc = could not find container \"04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38\": container with ID starting with 04bc3941314d7a48a75a311e507c9ae7ed2f1e822205726ff4d7b030bacbbf38 not found: ID does not exist"
Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.796715 4812 scope.go:117] "RemoveContainer" containerID="880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b"
Feb 16 13:44:24 crc
kubenswrapper[4812]: I0216 13:44:24.796919 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b"} err="failed to get container status \"880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\": rpc error: code = NotFound desc = could not find container \"880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b\": container with ID starting with 880849d940c076be5f8745fc85b9d6a673b9020bb84d855e984c1ce7d9185e6b not found: ID does not exist" Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.796946 4812 scope.go:117] "RemoveContainer" containerID="73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a" Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.797169 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a"} err="failed to get container status \"73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\": rpc error: code = NotFound desc = could not find container \"73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a\": container with ID starting with 73ed32edb7a7f0f9fd8c7516081719b4f898be82ddaf591929ffca02b757f90a not found: ID does not exist" Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.797193 4812 scope.go:117] "RemoveContainer" containerID="7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657" Feb 16 13:44:24 crc kubenswrapper[4812]: I0216 13:44:24.797406 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657"} err="failed to get container status \"7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\": rpc error: code = NotFound desc = could not find container \"7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657\": container 
with ID starting with 7a0d8b663b66e20d5efa0fdb9f1a3c4744808e878dd4ec6baeb927ff2613d657 not found: ID does not exist" Feb 16 13:44:25 crc kubenswrapper[4812]: I0216 13:44:25.520580 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" event={"ID":"7718bc10-4095-424c-8c59-702c3be17b88","Type":"ContainerStarted","Data":"638677f92748410cc69e49f8121bcc63f85d5db8b29a681a41786d9cec0f559c"} Feb 16 13:44:25 crc kubenswrapper[4812]: I0216 13:44:25.521698 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" event={"ID":"7718bc10-4095-424c-8c59-702c3be17b88","Type":"ContainerStarted","Data":"86aca9f8dda02df85bd4ed9cae6933ea64b40350fa0df50efe7c2089649e16d3"} Feb 16 13:44:25 crc kubenswrapper[4812]: I0216 13:44:25.521761 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" event={"ID":"7718bc10-4095-424c-8c59-702c3be17b88","Type":"ContainerStarted","Data":"ac61946b6bddeea6f568cbc6ee97488d062cda040a977f45f01a97885c242bcd"} Feb 16 13:44:25 crc kubenswrapper[4812]: I0216 13:44:25.521812 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" event={"ID":"7718bc10-4095-424c-8c59-702c3be17b88","Type":"ContainerStarted","Data":"bb7c01e4aa182cdd9a2ac1a345a0cb1e1cd3c94676db761f35e16e9e1b884d10"} Feb 16 13:44:25 crc kubenswrapper[4812]: I0216 13:44:25.521879 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" event={"ID":"7718bc10-4095-424c-8c59-702c3be17b88","Type":"ContainerStarted","Data":"b4fdce34c12205b780096aa4e37e4fc66a9f19103cd0336f900d42f0a25205be"} Feb 16 13:44:25 crc kubenswrapper[4812]: I0216 13:44:25.521932 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" 
event={"ID":"7718bc10-4095-424c-8c59-702c3be17b88","Type":"ContainerStarted","Data":"2f060dad2cdb35c4e7b76aca2b70e90002e43fc460fb90ce663c6e74b181da06"} Feb 16 13:44:25 crc kubenswrapper[4812]: I0216 13:44:25.892175 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a67ca714-af04-4a76-8a28-54d47f66b1fa" path="/var/lib/kubelet/pods/a67ca714-af04-4a76-8a28-54d47f66b1fa/volumes" Feb 16 13:44:27 crc kubenswrapper[4812]: I0216 13:44:27.255421 4812 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 13:44:28 crc kubenswrapper[4812]: I0216 13:44:28.539924 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" event={"ID":"7718bc10-4095-424c-8c59-702c3be17b88","Type":"ContainerStarted","Data":"65b72f107e08aa197f66607b305ec366c4e884286ced1ecdae26fc00a4ff17c9"} Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.502208 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9"] Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.503741 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.506101 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.506233 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-z76r6" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.507169 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.626632 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj"] Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.627264 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.631072 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.631548 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-mrb9j" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.647722 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj"] Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.648512 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.658308 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgz29\" (UniqueName: \"kubernetes.io/projected/9e3d83dd-a02e-46b8-8cb0-e3840347e5ad-kube-api-access-bgz29\") pod \"obo-prometheus-operator-68bc856cb9-q86d9\" (UID: \"9e3d83dd-a02e-46b8-8cb0-e3840347e5ad\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.759634 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/40d22a6e-3db9-43c6-9ca4-560ef32ca2a1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj\" (UID: \"40d22a6e-3db9-43c6-9ca4-560ef32ca2a1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.759691 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc311aeb-05a8-4b4d-abe0-c35db319d48a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj\" (UID: \"cc311aeb-05a8-4b4d-abe0-c35db319d48a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.759748 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/40d22a6e-3db9-43c6-9ca4-560ef32ca2a1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj\" (UID: \"40d22a6e-3db9-43c6-9ca4-560ef32ca2a1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" Feb 16 
13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.759782 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgz29\" (UniqueName: \"kubernetes.io/projected/9e3d83dd-a02e-46b8-8cb0-e3840347e5ad-kube-api-access-bgz29\") pod \"obo-prometheus-operator-68bc856cb9-q86d9\" (UID: \"9e3d83dd-a02e-46b8-8cb0-e3840347e5ad\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.759816 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc311aeb-05a8-4b4d-abe0-c35db319d48a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj\" (UID: \"cc311aeb-05a8-4b4d-abe0-c35db319d48a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.783202 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgz29\" (UniqueName: \"kubernetes.io/projected/9e3d83dd-a02e-46b8-8cb0-e3840347e5ad-kube-api-access-bgz29\") pod \"obo-prometheus-operator-68bc856cb9-q86d9\" (UID: \"9e3d83dd-a02e-46b8-8cb0-e3840347e5ad\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.821756 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" Feb 16 13:44:29 crc kubenswrapper[4812]: E0216 13:44:29.856234 4812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-q86d9_openshift-operators_9e3d83dd-a02e-46b8-8cb0-e3840347e5ad_0(56c51d86259085ab4c2169683fe16338bab7d2722a69471c3a2f71f1e7b26123): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 16 13:44:29 crc kubenswrapper[4812]: E0216 13:44:29.856302 4812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-q86d9_openshift-operators_9e3d83dd-a02e-46b8-8cb0-e3840347e5ad_0(56c51d86259085ab4c2169683fe16338bab7d2722a69471c3a2f71f1e7b26123): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" Feb 16 13:44:29 crc kubenswrapper[4812]: E0216 13:44:29.856330 4812 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-q86d9_openshift-operators_9e3d83dd-a02e-46b8-8cb0-e3840347e5ad_0(56c51d86259085ab4c2169683fe16338bab7d2722a69471c3a2f71f1e7b26123): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" Feb 16 13:44:29 crc kubenswrapper[4812]: E0216 13:44:29.856385 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-q86d9_openshift-operators(9e3d83dd-a02e-46b8-8cb0-e3840347e5ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-q86d9_openshift-operators(9e3d83dd-a02e-46b8-8cb0-e3840347e5ad)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-q86d9_openshift-operators_9e3d83dd-a02e-46b8-8cb0-e3840347e5ad_0(56c51d86259085ab4c2169683fe16338bab7d2722a69471c3a2f71f1e7b26123): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" podUID="9e3d83dd-a02e-46b8-8cb0-e3840347e5ad" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.861285 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/40d22a6e-3db9-43c6-9ca4-560ef32ca2a1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj\" (UID: \"40d22a6e-3db9-43c6-9ca4-560ef32ca2a1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.861345 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc311aeb-05a8-4b4d-abe0-c35db319d48a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj\" (UID: \"cc311aeb-05a8-4b4d-abe0-c35db319d48a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.861379 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/40d22a6e-3db9-43c6-9ca4-560ef32ca2a1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj\" (UID: \"40d22a6e-3db9-43c6-9ca4-560ef32ca2a1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.861415 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc311aeb-05a8-4b4d-abe0-c35db319d48a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj\" (UID: \"cc311aeb-05a8-4b4d-abe0-c35db319d48a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" Feb 16 13:44:29 crc 
kubenswrapper[4812]: I0216 13:44:29.867338 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cc311aeb-05a8-4b4d-abe0-c35db319d48a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj\" (UID: \"cc311aeb-05a8-4b4d-abe0-c35db319d48a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.867992 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cc311aeb-05a8-4b4d-abe0-c35db319d48a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj\" (UID: \"cc311aeb-05a8-4b4d-abe0-c35db319d48a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.868145 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/40d22a6e-3db9-43c6-9ca4-560ef32ca2a1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj\" (UID: \"40d22a6e-3db9-43c6-9ca4-560ef32ca2a1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.874164 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/40d22a6e-3db9-43c6-9ca4-560ef32ca2a1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj\" (UID: \"40d22a6e-3db9-43c6-9ca4-560ef32ca2a1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.876788 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9kvgm"] Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 
13:44:29.877668 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.880241 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-mqkwk" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.880432 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.943948 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.963120 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/90f1d72f-119e-4971-bfad-a3210f07e473-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9kvgm\" (UID: \"90f1d72f-119e-4971-bfad-a3210f07e473\") " pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.963209 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zhsh\" (UniqueName: \"kubernetes.io/projected/90f1d72f-119e-4971-bfad-a3210f07e473-kube-api-access-5zhsh\") pod \"observability-operator-59bdc8b94-9kvgm\" (UID: \"90f1d72f-119e-4971-bfad-a3210f07e473\") " pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:29 crc kubenswrapper[4812]: E0216 13:44:29.966076 4812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj_openshift-operators_cc311aeb-05a8-4b4d-abe0-c35db319d48a_0(e478253730c2307c5cff0e5541b8d79fbe02884c94ca0ad8d1db42d34dac37f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 13:44:29 crc kubenswrapper[4812]: E0216 13:44:29.966342 4812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj_openshift-operators_cc311aeb-05a8-4b4d-abe0-c35db319d48a_0(e478253730c2307c5cff0e5541b8d79fbe02884c94ca0ad8d1db42d34dac37f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" Feb 16 13:44:29 crc kubenswrapper[4812]: E0216 13:44:29.966372 4812 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj_openshift-operators_cc311aeb-05a8-4b4d-abe0-c35db319d48a_0(e478253730c2307c5cff0e5541b8d79fbe02884c94ca0ad8d1db42d34dac37f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" Feb 16 13:44:29 crc kubenswrapper[4812]: E0216 13:44:29.966432 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj_openshift-operators(cc311aeb-05a8-4b4d-abe0-c35db319d48a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj_openshift-operators(cc311aeb-05a8-4b4d-abe0-c35db319d48a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj_openshift-operators_cc311aeb-05a8-4b4d-abe0-c35db319d48a_0(e478253730c2307c5cff0e5541b8d79fbe02884c94ca0ad8d1db42d34dac37f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" podUID="cc311aeb-05a8-4b4d-abe0-c35db319d48a" Feb 16 13:44:29 crc kubenswrapper[4812]: I0216 13:44:29.966821 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" Feb 16 13:44:29 crc kubenswrapper[4812]: E0216 13:44:29.989998 4812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj_openshift-operators_40d22a6e-3db9-43c6-9ca4-560ef32ca2a1_0(acaffe15cfe09e74c1f7062b67c09ae84bbb1fb8fcaf836ee3feaf10c1c1a36b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 13:44:29 crc kubenswrapper[4812]: E0216 13:44:29.990070 4812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj_openshift-operators_40d22a6e-3db9-43c6-9ca4-560ef32ca2a1_0(acaffe15cfe09e74c1f7062b67c09ae84bbb1fb8fcaf836ee3feaf10c1c1a36b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" Feb 16 13:44:29 crc kubenswrapper[4812]: E0216 13:44:29.990102 4812 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj_openshift-operators_40d22a6e-3db9-43c6-9ca4-560ef32ca2a1_0(acaffe15cfe09e74c1f7062b67c09ae84bbb1fb8fcaf836ee3feaf10c1c1a36b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" Feb 16 13:44:29 crc kubenswrapper[4812]: E0216 13:44:29.990165 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj_openshift-operators(40d22a6e-3db9-43c6-9ca4-560ef32ca2a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj_openshift-operators(40d22a6e-3db9-43c6-9ca4-560ef32ca2a1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj_openshift-operators_40d22a6e-3db9-43c6-9ca4-560ef32ca2a1_0(acaffe15cfe09e74c1f7062b67c09ae84bbb1fb8fcaf836ee3feaf10c1c1a36b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" podUID="40d22a6e-3db9-43c6-9ca4-560ef32ca2a1" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.044331 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9fzvk"] Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.045123 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.047404 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-zhcx5" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.064397 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/90f1d72f-119e-4971-bfad-a3210f07e473-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9kvgm\" (UID: \"90f1d72f-119e-4971-bfad-a3210f07e473\") " pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.064464 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zhsh\" (UniqueName: \"kubernetes.io/projected/90f1d72f-119e-4971-bfad-a3210f07e473-kube-api-access-5zhsh\") pod \"observability-operator-59bdc8b94-9kvgm\" (UID: \"90f1d72f-119e-4971-bfad-a3210f07e473\") " pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.068268 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/90f1d72f-119e-4971-bfad-a3210f07e473-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9kvgm\" (UID: \"90f1d72f-119e-4971-bfad-a3210f07e473\") " 
pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.084500 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zhsh\" (UniqueName: \"kubernetes.io/projected/90f1d72f-119e-4971-bfad-a3210f07e473-kube-api-access-5zhsh\") pod \"observability-operator-59bdc8b94-9kvgm\" (UID: \"90f1d72f-119e-4971-bfad-a3210f07e473\") " pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.165307 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a975b82f-9342-4bf8-812a-0d2188aeef74-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9fzvk\" (UID: \"a975b82f-9342-4bf8-812a-0d2188aeef74\") " pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.165381 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5nrt\" (UniqueName: \"kubernetes.io/projected/a975b82f-9342-4bf8-812a-0d2188aeef74-kube-api-access-c5nrt\") pod \"perses-operator-5bf474d74f-9fzvk\" (UID: \"a975b82f-9342-4bf8-812a-0d2188aeef74\") " pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.208820 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:30 crc kubenswrapper[4812]: E0216 13:44:30.233641 4812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9kvgm_openshift-operators_90f1d72f-119e-4971-bfad-a3210f07e473_0(8e86dd0afc7250ec22929c8d968a9a2e8007cdea4a9d716f5e57f2d56ee2fa91): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 16 13:44:30 crc kubenswrapper[4812]: E0216 13:44:30.233726 4812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9kvgm_openshift-operators_90f1d72f-119e-4971-bfad-a3210f07e473_0(8e86dd0afc7250ec22929c8d968a9a2e8007cdea4a9d716f5e57f2d56ee2fa91): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:30 crc kubenswrapper[4812]: E0216 13:44:30.233751 4812 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9kvgm_openshift-operators_90f1d72f-119e-4971-bfad-a3210f07e473_0(8e86dd0afc7250ec22929c8d968a9a2e8007cdea4a9d716f5e57f2d56ee2fa91): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:30 crc kubenswrapper[4812]: E0216 13:44:30.233806 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9kvgm_openshift-operators(90f1d72f-119e-4971-bfad-a3210f07e473)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9kvgm_openshift-operators(90f1d72f-119e-4971-bfad-a3210f07e473)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9kvgm_openshift-operators_90f1d72f-119e-4971-bfad-a3210f07e473_0(8e86dd0afc7250ec22929c8d968a9a2e8007cdea4a9d716f5e57f2d56ee2fa91): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" podUID="90f1d72f-119e-4971-bfad-a3210f07e473" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.266170 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a975b82f-9342-4bf8-812a-0d2188aeef74-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9fzvk\" (UID: \"a975b82f-9342-4bf8-812a-0d2188aeef74\") " pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.266267 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nrt\" (UniqueName: \"kubernetes.io/projected/a975b82f-9342-4bf8-812a-0d2188aeef74-kube-api-access-c5nrt\") pod \"perses-operator-5bf474d74f-9fzvk\" (UID: \"a975b82f-9342-4bf8-812a-0d2188aeef74\") " pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.267269 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a975b82f-9342-4bf8-812a-0d2188aeef74-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9fzvk\" (UID: \"a975b82f-9342-4bf8-812a-0d2188aeef74\") " pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.322255 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5nrt\" (UniqueName: \"kubernetes.io/projected/a975b82f-9342-4bf8-812a-0d2188aeef74-kube-api-access-c5nrt\") pod \"perses-operator-5bf474d74f-9fzvk\" (UID: \"a975b82f-9342-4bf8-812a-0d2188aeef74\") " pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.362630 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:30 crc kubenswrapper[4812]: E0216 13:44:30.398494 4812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9fzvk_openshift-operators_a975b82f-9342-4bf8-812a-0d2188aeef74_0(99ee5c8e171498e0af5016d4f0b7dbf4339c760be255ffff39f12d4b0ac368ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 13:44:30 crc kubenswrapper[4812]: E0216 13:44:30.398637 4812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9fzvk_openshift-operators_a975b82f-9342-4bf8-812a-0d2188aeef74_0(99ee5c8e171498e0af5016d4f0b7dbf4339c760be255ffff39f12d4b0ac368ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:30 crc kubenswrapper[4812]: E0216 13:44:30.398707 4812 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9fzvk_openshift-operators_a975b82f-9342-4bf8-812a-0d2188aeef74_0(99ee5c8e171498e0af5016d4f0b7dbf4339c760be255ffff39f12d4b0ac368ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:30 crc kubenswrapper[4812]: E0216 13:44:30.398800 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-9fzvk_openshift-operators(a975b82f-9342-4bf8-812a-0d2188aeef74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-9fzvk_openshift-operators(a975b82f-9342-4bf8-812a-0d2188aeef74)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9fzvk_openshift-operators_a975b82f-9342-4bf8-812a-0d2188aeef74_0(99ee5c8e171498e0af5016d4f0b7dbf4339c760be255ffff39f12d4b0ac368ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" podUID="a975b82f-9342-4bf8-812a-0d2188aeef74" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.553285 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" event={"ID":"7718bc10-4095-424c-8c59-702c3be17b88","Type":"ContainerStarted","Data":"d34ac9fa64240eae03fc884fceb19e82f86221f822fa377c9d48f53e2672a088"} Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.554560 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.578089 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:30 crc kubenswrapper[4812]: I0216 13:44:30.584091 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" podStartSLOduration=7.584076958 podStartE2EDuration="7.584076958s" podCreationTimestamp="2026-02-16 13:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-16 13:44:30.58274153 +0000 UTC m=+759.647072261" watchObservedRunningTime="2026-02-16 13:44:30.584076958 +0000 UTC m=+759.648407659" Feb 16 13:44:31 crc kubenswrapper[4812]: I0216 13:44:31.557771 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:31 crc kubenswrapper[4812]: I0216 13:44:31.557817 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:31 crc kubenswrapper[4812]: I0216 13:44:31.600315 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:32 crc kubenswrapper[4812]: I0216 13:44:32.801660 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9fzvk"] Feb 16 13:44:32 crc kubenswrapper[4812]: I0216 13:44:32.802312 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:32 crc kubenswrapper[4812]: I0216 13:44:32.802756 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:32 crc kubenswrapper[4812]: I0216 13:44:32.806327 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9"] Feb 16 13:44:32 crc kubenswrapper[4812]: I0216 13:44:32.806433 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" Feb 16 13:44:32 crc kubenswrapper[4812]: I0216 13:44:32.806817 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" Feb 16 13:44:32 crc kubenswrapper[4812]: I0216 13:44:32.834513 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj"] Feb 16 13:44:32 crc kubenswrapper[4812]: I0216 13:44:32.834628 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" Feb 16 13:44:32 crc kubenswrapper[4812]: I0216 13:44:32.835005 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" Feb 16 13:44:32 crc kubenswrapper[4812]: I0216 13:44:32.838112 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj"] Feb 16 13:44:32 crc kubenswrapper[4812]: I0216 13:44:32.838201 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" Feb 16 13:44:32 crc kubenswrapper[4812]: I0216 13:44:32.838548 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.924457 4812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9fzvk_openshift-operators_a975b82f-9342-4bf8-812a-0d2188aeef74_0(f161f3d4c4a254fa8cef7ba53acca73ece83587ecaba9ec8dd3995ac5a07a5cb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.924540 4812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9fzvk_openshift-operators_a975b82f-9342-4bf8-812a-0d2188aeef74_0(f161f3d4c4a254fa8cef7ba53acca73ece83587ecaba9ec8dd3995ac5a07a5cb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.924571 4812 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9fzvk_openshift-operators_a975b82f-9342-4bf8-812a-0d2188aeef74_0(f161f3d4c4a254fa8cef7ba53acca73ece83587ecaba9ec8dd3995ac5a07a5cb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.924635 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-9fzvk_openshift-operators(a975b82f-9342-4bf8-812a-0d2188aeef74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-9fzvk_openshift-operators(a975b82f-9342-4bf8-812a-0d2188aeef74)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9fzvk_openshift-operators_a975b82f-9342-4bf8-812a-0d2188aeef74_0(f161f3d4c4a254fa8cef7ba53acca73ece83587ecaba9ec8dd3995ac5a07a5cb): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" podUID="a975b82f-9342-4bf8-812a-0d2188aeef74" Feb 16 13:44:32 crc kubenswrapper[4812]: I0216 13:44:32.928837 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9kvgm"] Feb 16 13:44:32 crc kubenswrapper[4812]: I0216 13:44:32.928953 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:32 crc kubenswrapper[4812]: I0216 13:44:32.929316 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.952112 4812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj_openshift-operators_40d22a6e-3db9-43c6-9ca4-560ef32ca2a1_0(9b64923661f5fd739afcf72faa014e7780be72c88bf30e7cb2a31b5556194511): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.952180 4812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj_openshift-operators_40d22a6e-3db9-43c6-9ca4-560ef32ca2a1_0(9b64923661f5fd739afcf72faa014e7780be72c88bf30e7cb2a31b5556194511): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.952202 4812 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj_openshift-operators_40d22a6e-3db9-43c6-9ca4-560ef32ca2a1_0(9b64923661f5fd739afcf72faa014e7780be72c88bf30e7cb2a31b5556194511): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.952252 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj_openshift-operators(40d22a6e-3db9-43c6-9ca4-560ef32ca2a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj_openshift-operators(40d22a6e-3db9-43c6-9ca4-560ef32ca2a1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj_openshift-operators_40d22a6e-3db9-43c6-9ca4-560ef32ca2a1_0(9b64923661f5fd739afcf72faa014e7780be72c88bf30e7cb2a31b5556194511): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" podUID="40d22a6e-3db9-43c6-9ca4-560ef32ca2a1" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.961815 4812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj_openshift-operators_cc311aeb-05a8-4b4d-abe0-c35db319d48a_0(ff57df0b103f0b09d494781d25737544684faf781626db15d78695e36562514a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.961889 4812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj_openshift-operators_cc311aeb-05a8-4b4d-abe0-c35db319d48a_0(ff57df0b103f0b09d494781d25737544684faf781626db15d78695e36562514a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.961913 4812 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj_openshift-operators_cc311aeb-05a8-4b4d-abe0-c35db319d48a_0(ff57df0b103f0b09d494781d25737544684faf781626db15d78695e36562514a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.961965 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj_openshift-operators(cc311aeb-05a8-4b4d-abe0-c35db319d48a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj_openshift-operators(cc311aeb-05a8-4b4d-abe0-c35db319d48a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj_openshift-operators_cc311aeb-05a8-4b4d-abe0-c35db319d48a_0(ff57df0b103f0b09d494781d25737544684faf781626db15d78695e36562514a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" podUID="cc311aeb-05a8-4b4d-abe0-c35db319d48a" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.962591 4812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-q86d9_openshift-operators_9e3d83dd-a02e-46b8-8cb0-e3840347e5ad_0(0ff6197673e0eaa209c0ea8e4a8fbdbec1e023216666f9186b0bad8245541069): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.962622 4812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-q86d9_openshift-operators_9e3d83dd-a02e-46b8-8cb0-e3840347e5ad_0(0ff6197673e0eaa209c0ea8e4a8fbdbec1e023216666f9186b0bad8245541069): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.962636 4812 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-q86d9_openshift-operators_9e3d83dd-a02e-46b8-8cb0-e3840347e5ad_0(0ff6197673e0eaa209c0ea8e4a8fbdbec1e023216666f9186b0bad8245541069): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.962667 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-q86d9_openshift-operators(9e3d83dd-a02e-46b8-8cb0-e3840347e5ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-q86d9_openshift-operators(9e3d83dd-a02e-46b8-8cb0-e3840347e5ad)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-q86d9_openshift-operators_9e3d83dd-a02e-46b8-8cb0-e3840347e5ad_0(0ff6197673e0eaa209c0ea8e4a8fbdbec1e023216666f9186b0bad8245541069): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" podUID="9e3d83dd-a02e-46b8-8cb0-e3840347e5ad" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.984372 4812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9kvgm_openshift-operators_90f1d72f-119e-4971-bfad-a3210f07e473_0(b336cc0dc883e778981588997a87ea2ea4a080971610d2485c5317f30dcc65ef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.984466 4812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9kvgm_openshift-operators_90f1d72f-119e-4971-bfad-a3210f07e473_0(b336cc0dc883e778981588997a87ea2ea4a080971610d2485c5317f30dcc65ef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.984486 4812 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9kvgm_openshift-operators_90f1d72f-119e-4971-bfad-a3210f07e473_0(b336cc0dc883e778981588997a87ea2ea4a080971610d2485c5317f30dcc65ef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:32 crc kubenswrapper[4812]: E0216 13:44:32.984520 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9kvgm_openshift-operators(90f1d72f-119e-4971-bfad-a3210f07e473)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9kvgm_openshift-operators(90f1d72f-119e-4971-bfad-a3210f07e473)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9kvgm_openshift-operators_90f1d72f-119e-4971-bfad-a3210f07e473_0(b336cc0dc883e778981588997a87ea2ea4a080971610d2485c5317f30dcc65ef): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" podUID="90f1d72f-119e-4971-bfad-a3210f07e473" Feb 16 13:44:43 crc kubenswrapper[4812]: I0216 13:44:43.878686 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:43 crc kubenswrapper[4812]: I0216 13:44:43.878785 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" Feb 16 13:44:43 crc kubenswrapper[4812]: I0216 13:44:43.879288 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" Feb 16 13:44:43 crc kubenswrapper[4812]: I0216 13:44:43.879290 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:44 crc kubenswrapper[4812]: I0216 13:44:44.342952 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj"] Feb 16 13:44:44 crc kubenswrapper[4812]: I0216 13:44:44.443623 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9fzvk"] Feb 16 13:44:44 crc kubenswrapper[4812]: I0216 13:44:44.548998 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:44:44 crc kubenswrapper[4812]: I0216 13:44:44.549054 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:44:44 crc kubenswrapper[4812]: I0216 13:44:44.549104 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:44:44 crc kubenswrapper[4812]: I0216 13:44:44.549857 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69f6102fd067a315bb5fa977a52583563bb8e2109920c634e663dafde3b8d90e"} pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:44:44 crc kubenswrapper[4812]: I0216 13:44:44.549925 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" containerID="cri-o://69f6102fd067a315bb5fa977a52583563bb8e2109920c634e663dafde3b8d90e" gracePeriod=600 Feb 16 13:44:44 crc kubenswrapper[4812]: I0216 13:44:44.735299 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" event={"ID":"cc311aeb-05a8-4b4d-abe0-c35db319d48a","Type":"ContainerStarted","Data":"caad7621f5bbdcde0486da5ca95439c0153ae050521c8a8108e74d3f58163027"} Feb 16 13:44:44 crc kubenswrapper[4812]: I0216 13:44:44.736461 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" event={"ID":"a975b82f-9342-4bf8-812a-0d2188aeef74","Type":"ContainerStarted","Data":"d750e06bf8280b79b732a794f74f3a0c0d33357932e0c340a3a98980aafb0fff"} Feb 16 13:44:44 crc kubenswrapper[4812]: I0216 13:44:44.878622 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:44 crc kubenswrapper[4812]: I0216 13:44:44.879069 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:45 crc kubenswrapper[4812]: I0216 13:44:45.253184 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9kvgm"] Feb 16 13:44:45 crc kubenswrapper[4812]: I0216 13:44:45.757948 4812 generic.go:334] "Generic (PLEG): container finished" podID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerID="69f6102fd067a315bb5fa977a52583563bb8e2109920c634e663dafde3b8d90e" exitCode=0 Feb 16 13:44:45 crc kubenswrapper[4812]: I0216 13:44:45.758211 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerDied","Data":"69f6102fd067a315bb5fa977a52583563bb8e2109920c634e663dafde3b8d90e"} Feb 16 13:44:45 crc kubenswrapper[4812]: I0216 13:44:45.758236 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerStarted","Data":"7cdd40ec1858c86be76b1abaa1c0c47ea05268682d8c62fb36cfc403870db38c"} Feb 16 13:44:45 crc kubenswrapper[4812]: I0216 13:44:45.758258 4812 scope.go:117] "RemoveContainer" containerID="a72a25e58c4955d9061eb3209cc9f8e59817b6546c4b8aafdb6b583903cc792d" Feb 16 13:44:45 crc kubenswrapper[4812]: I0216 13:44:45.779429 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" event={"ID":"90f1d72f-119e-4971-bfad-a3210f07e473","Type":"ContainerStarted","Data":"588af788c3a9501ca1500552ce9ad78f3ef3dfd52e34eb4aea541aa6224567e5"} Feb 16 13:44:45 crc kubenswrapper[4812]: I0216 13:44:45.886965 4812 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" Feb 16 13:44:45 crc kubenswrapper[4812]: I0216 13:44:45.887488 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" Feb 16 13:44:46 crc kubenswrapper[4812]: I0216 13:44:46.434139 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj"] Feb 16 13:44:46 crc kubenswrapper[4812]: W0216 13:44:46.454313 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40d22a6e_3db9_43c6_9ca4_560ef32ca2a1.slice/crio-027c449fe56acb420d6d472a047c47778e95f5daf6a304492d5840f4e05db277 WatchSource:0}: Error finding container 027c449fe56acb420d6d472a047c47778e95f5daf6a304492d5840f4e05db277: Status 404 returned error can't find the container with id 027c449fe56acb420d6d472a047c47778e95f5daf6a304492d5840f4e05db277 Feb 16 13:44:46 crc kubenswrapper[4812]: I0216 13:44:46.799642 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" event={"ID":"40d22a6e-3db9-43c6-9ca4-560ef32ca2a1","Type":"ContainerStarted","Data":"027c449fe56acb420d6d472a047c47778e95f5daf6a304492d5840f4e05db277"} Feb 16 13:44:47 crc kubenswrapper[4812]: I0216 13:44:47.882403 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" Feb 16 13:44:47 crc kubenswrapper[4812]: I0216 13:44:47.882973 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" Feb 16 13:44:53 crc kubenswrapper[4812]: I0216 13:44:53.910116 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fj6r6" Feb 16 13:44:55 crc kubenswrapper[4812]: I0216 13:44:55.466965 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8jqkv"] Feb 16 13:44:55 crc kubenswrapper[4812]: I0216 13:44:55.468293 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:44:55 crc kubenswrapper[4812]: I0216 13:44:55.474067 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b0b68d-4734-4d69-aee5-2da69c86a479-utilities\") pod \"redhat-marketplace-8jqkv\" (UID: \"96b0b68d-4734-4d69-aee5-2da69c86a479\") " pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:44:55 crc kubenswrapper[4812]: I0216 13:44:55.478719 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jqkv"] Feb 16 13:44:55 crc kubenswrapper[4812]: I0216 13:44:55.575787 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srkt9\" (UniqueName: \"kubernetes.io/projected/96b0b68d-4734-4d69-aee5-2da69c86a479-kube-api-access-srkt9\") pod \"redhat-marketplace-8jqkv\" (UID: \"96b0b68d-4734-4d69-aee5-2da69c86a479\") " pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:44:55 crc kubenswrapper[4812]: I0216 13:44:55.576107 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b0b68d-4734-4d69-aee5-2da69c86a479-catalog-content\") pod \"redhat-marketplace-8jqkv\" (UID: \"96b0b68d-4734-4d69-aee5-2da69c86a479\") 
" pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:44:55 crc kubenswrapper[4812]: I0216 13:44:55.576168 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b0b68d-4734-4d69-aee5-2da69c86a479-utilities\") pod \"redhat-marketplace-8jqkv\" (UID: \"96b0b68d-4734-4d69-aee5-2da69c86a479\") " pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:44:55 crc kubenswrapper[4812]: I0216 13:44:55.576624 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b0b68d-4734-4d69-aee5-2da69c86a479-utilities\") pod \"redhat-marketplace-8jqkv\" (UID: \"96b0b68d-4734-4d69-aee5-2da69c86a479\") " pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:44:55 crc kubenswrapper[4812]: I0216 13:44:55.677196 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srkt9\" (UniqueName: \"kubernetes.io/projected/96b0b68d-4734-4d69-aee5-2da69c86a479-kube-api-access-srkt9\") pod \"redhat-marketplace-8jqkv\" (UID: \"96b0b68d-4734-4d69-aee5-2da69c86a479\") " pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:44:55 crc kubenswrapper[4812]: I0216 13:44:55.677254 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b0b68d-4734-4d69-aee5-2da69c86a479-catalog-content\") pod \"redhat-marketplace-8jqkv\" (UID: \"96b0b68d-4734-4d69-aee5-2da69c86a479\") " pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:44:55 crc kubenswrapper[4812]: I0216 13:44:55.679254 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b0b68d-4734-4d69-aee5-2da69c86a479-catalog-content\") pod \"redhat-marketplace-8jqkv\" (UID: \"96b0b68d-4734-4d69-aee5-2da69c86a479\") " 
pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:44:55 crc kubenswrapper[4812]: I0216 13:44:55.698479 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srkt9\" (UniqueName: \"kubernetes.io/projected/96b0b68d-4734-4d69-aee5-2da69c86a479-kube-api-access-srkt9\") pod \"redhat-marketplace-8jqkv\" (UID: \"96b0b68d-4734-4d69-aee5-2da69c86a479\") " pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:44:55 crc kubenswrapper[4812]: I0216 13:44:55.784452 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:44:59 crc kubenswrapper[4812]: I0216 13:44:59.013834 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9"] Feb 16 13:44:59 crc kubenswrapper[4812]: W0216 13:44:59.023537 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e3d83dd_a02e_46b8_8cb0_e3840347e5ad.slice/crio-bd9c68cb36e3552255c642e19a3f4c5e47d618fca42808d77fc293793cb8b4be WatchSource:0}: Error finding container bd9c68cb36e3552255c642e19a3f4c5e47d618fca42808d77fc293793cb8b4be: Status 404 returned error can't find the container with id bd9c68cb36e3552255c642e19a3f4c5e47d618fca42808d77fc293793cb8b4be Feb 16 13:44:59 crc kubenswrapper[4812]: I0216 13:44:59.193315 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jqkv"] Feb 16 13:44:59 crc kubenswrapper[4812]: I0216 13:44:59.207904 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" event={"ID":"40d22a6e-3db9-43c6-9ca4-560ef32ca2a1","Type":"ContainerStarted","Data":"10ee1a257f2d4df830351d545ae24fcf20ebab8ddad324c5e63503d283248fe0"} Feb 16 13:44:59 crc kubenswrapper[4812]: I0216 13:44:59.213039 4812 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" event={"ID":"90f1d72f-119e-4971-bfad-a3210f07e473","Type":"ContainerStarted","Data":"1cf8ee94049a6d1f047e5dac4d93e8b2b281d2d9bd551691e09ac0844e7d6c89"} Feb 16 13:44:59 crc kubenswrapper[4812]: I0216 13:44:59.213229 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:44:59 crc kubenswrapper[4812]: I0216 13:44:59.214413 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" event={"ID":"9e3d83dd-a02e-46b8-8cb0-e3840347e5ad","Type":"ContainerStarted","Data":"bd9c68cb36e3552255c642e19a3f4c5e47d618fca42808d77fc293793cb8b4be"} Feb 16 13:44:59 crc kubenswrapper[4812]: I0216 13:44:59.214421 4812 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9kvgm container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.41:8081/healthz\": dial tcp 10.217.0.41:8081: connect: connection refused" start-of-body= Feb 16 13:44:59 crc kubenswrapper[4812]: I0216 13:44:59.214567 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" podUID="90f1d72f-119e-4971-bfad-a3210f07e473" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.41:8081/healthz\": dial tcp 10.217.0.41:8081: connect: connection refused" Feb 16 13:44:59 crc kubenswrapper[4812]: I0216 13:44:59.219582 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" event={"ID":"cc311aeb-05a8-4b4d-abe0-c35db319d48a","Type":"ContainerStarted","Data":"4be02f0b66b2c97202a2a1b0120a70081f1a98a6cf16fef20224a444cad40aee"} Feb 16 13:44:59 crc kubenswrapper[4812]: I0216 13:44:59.228107 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" event={"ID":"a975b82f-9342-4bf8-812a-0d2188aeef74","Type":"ContainerStarted","Data":"f27b1860fc68f64e8a430965c7819ea5b734fc8b3921784b9577e27f5e0ce0c8"} Feb 16 13:44:59 crc kubenswrapper[4812]: I0216 13:44:59.228476 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:44:59 crc kubenswrapper[4812]: I0216 13:44:59.228456 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj" podStartSLOduration=17.81237318 podStartE2EDuration="30.228419856s" podCreationTimestamp="2026-02-16 13:44:29 +0000 UTC" firstStartedPulling="2026-02-16 13:44:46.458461868 +0000 UTC m=+775.522792569" lastFinishedPulling="2026-02-16 13:44:58.874508544 +0000 UTC m=+787.938839245" observedRunningTime="2026-02-16 13:44:59.227751728 +0000 UTC m=+788.292082429" watchObservedRunningTime="2026-02-16 13:44:59.228419856 +0000 UTC m=+788.292750567" Feb 16 13:44:59 crc kubenswrapper[4812]: I0216 13:44:59.253237 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" podStartSLOduration=16.616252917 podStartE2EDuration="30.253218363s" podCreationTimestamp="2026-02-16 13:44:29 +0000 UTC" firstStartedPulling="2026-02-16 13:44:45.279466566 +0000 UTC m=+774.343797267" lastFinishedPulling="2026-02-16 13:44:58.916432012 +0000 UTC m=+787.980762713" observedRunningTime="2026-02-16 13:44:59.249158819 +0000 UTC m=+788.313489530" watchObservedRunningTime="2026-02-16 13:44:59.253218363 +0000 UTC m=+788.317549064" Feb 16 13:44:59 crc kubenswrapper[4812]: I0216 13:44:59.273231 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj" podStartSLOduration=15.774850959 podStartE2EDuration="30.273214805s" 
podCreationTimestamp="2026-02-16 13:44:29 +0000 UTC" firstStartedPulling="2026-02-16 13:44:44.376992512 +0000 UTC m=+773.441323213" lastFinishedPulling="2026-02-16 13:44:58.875356358 +0000 UTC m=+787.939687059" observedRunningTime="2026-02-16 13:44:59.271762474 +0000 UTC m=+788.336093205" watchObservedRunningTime="2026-02-16 13:44:59.273214805 +0000 UTC m=+788.337545506" Feb 16 13:44:59 crc kubenswrapper[4812]: I0216 13:44:59.311432 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" podStartSLOduration=14.90266554 podStartE2EDuration="29.311410838s" podCreationTimestamp="2026-02-16 13:44:30 +0000 UTC" firstStartedPulling="2026-02-16 13:44:44.457656968 +0000 UTC m=+773.521987669" lastFinishedPulling="2026-02-16 13:44:58.866402266 +0000 UTC m=+787.930732967" observedRunningTime="2026-02-16 13:44:59.309074122 +0000 UTC m=+788.373404833" watchObservedRunningTime="2026-02-16 13:44:59.311410838 +0000 UTC m=+788.375741539" Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.160052 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9"] Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.161062 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.163313 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.163523 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.170154 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9"] Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.210289 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec93766a-8778-44ae-a75d-b348dbb218e5-config-volume\") pod \"collect-profiles-29520825-57tb9\" (UID: \"ec93766a-8778-44ae-a75d-b348dbb218e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.210357 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7kbc\" (UniqueName: \"kubernetes.io/projected/ec93766a-8778-44ae-a75d-b348dbb218e5-kube-api-access-d7kbc\") pod \"collect-profiles-29520825-57tb9\" (UID: \"ec93766a-8778-44ae-a75d-b348dbb218e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.210541 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec93766a-8778-44ae-a75d-b348dbb218e5-secret-volume\") pod \"collect-profiles-29520825-57tb9\" (UID: \"ec93766a-8778-44ae-a75d-b348dbb218e5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.235370 4812 generic.go:334] "Generic (PLEG): container finished" podID="96b0b68d-4734-4d69-aee5-2da69c86a479" containerID="871fd48105e003c5a5a6823146433017f81000cf1af425fae1bd2367e48cd91b" exitCode=0 Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.235519 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jqkv" event={"ID":"96b0b68d-4734-4d69-aee5-2da69c86a479","Type":"ContainerDied","Data":"871fd48105e003c5a5a6823146433017f81000cf1af425fae1bd2367e48cd91b"} Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.235563 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jqkv" event={"ID":"96b0b68d-4734-4d69-aee5-2da69c86a479","Type":"ContainerStarted","Data":"72005a1b8a83642d263df9514ef5614c3565d2c953aeb3b781cf950bd7886b4b"} Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.272153 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-9kvgm" Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.311273 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7kbc\" (UniqueName: \"kubernetes.io/projected/ec93766a-8778-44ae-a75d-b348dbb218e5-kube-api-access-d7kbc\") pod \"collect-profiles-29520825-57tb9\" (UID: \"ec93766a-8778-44ae-a75d-b348dbb218e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.311399 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec93766a-8778-44ae-a75d-b348dbb218e5-secret-volume\") pod \"collect-profiles-29520825-57tb9\" (UID: \"ec93766a-8778-44ae-a75d-b348dbb218e5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.311554 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec93766a-8778-44ae-a75d-b348dbb218e5-config-volume\") pod \"collect-profiles-29520825-57tb9\" (UID: \"ec93766a-8778-44ae-a75d-b348dbb218e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.318498 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec93766a-8778-44ae-a75d-b348dbb218e5-config-volume\") pod \"collect-profiles-29520825-57tb9\" (UID: \"ec93766a-8778-44ae-a75d-b348dbb218e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.324592 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec93766a-8778-44ae-a75d-b348dbb218e5-secret-volume\") pod \"collect-profiles-29520825-57tb9\" (UID: \"ec93766a-8778-44ae-a75d-b348dbb218e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.345424 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7kbc\" (UniqueName: \"kubernetes.io/projected/ec93766a-8778-44ae-a75d-b348dbb218e5-kube-api-access-d7kbc\") pod \"collect-profiles-29520825-57tb9\" (UID: \"ec93766a-8778-44ae-a75d-b348dbb218e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.495434 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" Feb 16 13:45:00 crc kubenswrapper[4812]: I0216 13:45:00.974328 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9"] Feb 16 13:45:01 crc kubenswrapper[4812]: I0216 13:45:01.242610 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jqkv" event={"ID":"96b0b68d-4734-4d69-aee5-2da69c86a479","Type":"ContainerStarted","Data":"b7dafcf065b2e1f038084f8e53bd16bb7cc605c62aec5c9762d4ec6c0faf7e43"} Feb 16 13:45:01 crc kubenswrapper[4812]: I0216 13:45:01.246182 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" event={"ID":"ec93766a-8778-44ae-a75d-b348dbb218e5","Type":"ContainerStarted","Data":"b53207774a057ac0da7d2b36f4cd961b5843e767ee21da0f23d774e3f456b592"} Feb 16 13:45:01 crc kubenswrapper[4812]: I0216 13:45:01.246217 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" event={"ID":"ec93766a-8778-44ae-a75d-b348dbb218e5","Type":"ContainerStarted","Data":"e3655dbc8cf4e23af3a60f6a2499a3dcef1314b1c1810bc31cbef3173bde29f6"} Feb 16 13:45:01 crc kubenswrapper[4812]: I0216 13:45:01.332783 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" podStartSLOduration=1.332765974 podStartE2EDuration="1.332765974s" podCreationTimestamp="2026-02-16 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:45:01.331814747 +0000 UTC m=+790.396145458" watchObservedRunningTime="2026-02-16 13:45:01.332765974 +0000 UTC m=+790.397096685" Feb 16 13:45:02 crc kubenswrapper[4812]: I0216 13:45:02.254084 4812 generic.go:334] "Generic 
(PLEG): container finished" podID="ec93766a-8778-44ae-a75d-b348dbb218e5" containerID="b53207774a057ac0da7d2b36f4cd961b5843e767ee21da0f23d774e3f456b592" exitCode=0 Feb 16 13:45:02 crc kubenswrapper[4812]: I0216 13:45:02.254166 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" event={"ID":"ec93766a-8778-44ae-a75d-b348dbb218e5","Type":"ContainerDied","Data":"b53207774a057ac0da7d2b36f4cd961b5843e767ee21da0f23d774e3f456b592"} Feb 16 13:45:02 crc kubenswrapper[4812]: I0216 13:45:02.258207 4812 generic.go:334] "Generic (PLEG): container finished" podID="96b0b68d-4734-4d69-aee5-2da69c86a479" containerID="b7dafcf065b2e1f038084f8e53bd16bb7cc605c62aec5c9762d4ec6c0faf7e43" exitCode=0 Feb 16 13:45:02 crc kubenswrapper[4812]: I0216 13:45:02.258245 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jqkv" event={"ID":"96b0b68d-4734-4d69-aee5-2da69c86a479","Type":"ContainerDied","Data":"b7dafcf065b2e1f038084f8e53bd16bb7cc605c62aec5c9762d4ec6c0faf7e43"} Feb 16 13:45:03 crc kubenswrapper[4812]: I0216 13:45:03.267988 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jqkv" event={"ID":"96b0b68d-4734-4d69-aee5-2da69c86a479","Type":"ContainerStarted","Data":"8e83c435d6176d48ff0c1a16fb17e882923e135f61d4bc1a2898a90876319b38"} Feb 16 13:45:03 crc kubenswrapper[4812]: I0216 13:45:03.269841 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" event={"ID":"9e3d83dd-a02e-46b8-8cb0-e3840347e5ad","Type":"ContainerStarted","Data":"4814fd8c1fd1f863958bf9e1f6321d2079774c4f0ba1d4fd0a736e4cc312ecba"} Feb 16 13:45:03 crc kubenswrapper[4812]: I0216 13:45:03.287088 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8jqkv" podStartSLOduration=5.677862796 podStartE2EDuration="8.287065597s" 
podCreationTimestamp="2026-02-16 13:44:55 +0000 UTC" firstStartedPulling="2026-02-16 13:45:00.237090113 +0000 UTC m=+789.301420814" lastFinishedPulling="2026-02-16 13:45:02.846292914 +0000 UTC m=+791.910623615" observedRunningTime="2026-02-16 13:45:03.284403892 +0000 UTC m=+792.348734593" watchObservedRunningTime="2026-02-16 13:45:03.287065597 +0000 UTC m=+792.351396298" Feb 16 13:45:03 crc kubenswrapper[4812]: I0216 13:45:03.302056 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q86d9" podStartSLOduration=30.946360396 podStartE2EDuration="34.302032748s" podCreationTimestamp="2026-02-16 13:44:29 +0000 UTC" firstStartedPulling="2026-02-16 13:44:59.027400159 +0000 UTC m=+788.091730860" lastFinishedPulling="2026-02-16 13:45:02.383072511 +0000 UTC m=+791.447403212" observedRunningTime="2026-02-16 13:45:03.299766324 +0000 UTC m=+792.364097025" watchObservedRunningTime="2026-02-16 13:45:03.302032748 +0000 UTC m=+792.366363449" Feb 16 13:45:03 crc kubenswrapper[4812]: I0216 13:45:03.748329 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" Feb 16 13:45:03 crc kubenswrapper[4812]: I0216 13:45:03.863040 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7kbc\" (UniqueName: \"kubernetes.io/projected/ec93766a-8778-44ae-a75d-b348dbb218e5-kube-api-access-d7kbc\") pod \"ec93766a-8778-44ae-a75d-b348dbb218e5\" (UID: \"ec93766a-8778-44ae-a75d-b348dbb218e5\") " Feb 16 13:45:03 crc kubenswrapper[4812]: I0216 13:45:03.863155 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec93766a-8778-44ae-a75d-b348dbb218e5-config-volume\") pod \"ec93766a-8778-44ae-a75d-b348dbb218e5\" (UID: \"ec93766a-8778-44ae-a75d-b348dbb218e5\") " Feb 16 13:45:03 crc kubenswrapper[4812]: I0216 13:45:03.863231 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec93766a-8778-44ae-a75d-b348dbb218e5-secret-volume\") pod \"ec93766a-8778-44ae-a75d-b348dbb218e5\" (UID: \"ec93766a-8778-44ae-a75d-b348dbb218e5\") " Feb 16 13:45:03 crc kubenswrapper[4812]: I0216 13:45:03.863862 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec93766a-8778-44ae-a75d-b348dbb218e5-config-volume" (OuterVolumeSpecName: "config-volume") pod "ec93766a-8778-44ae-a75d-b348dbb218e5" (UID: "ec93766a-8778-44ae-a75d-b348dbb218e5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:45:03 crc kubenswrapper[4812]: I0216 13:45:03.864514 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec93766a-8778-44ae-a75d-b348dbb218e5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:45:03 crc kubenswrapper[4812]: I0216 13:45:03.868354 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec93766a-8778-44ae-a75d-b348dbb218e5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ec93766a-8778-44ae-a75d-b348dbb218e5" (UID: "ec93766a-8778-44ae-a75d-b348dbb218e5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:45:03 crc kubenswrapper[4812]: I0216 13:45:03.876727 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec93766a-8778-44ae-a75d-b348dbb218e5-kube-api-access-d7kbc" (OuterVolumeSpecName: "kube-api-access-d7kbc") pod "ec93766a-8778-44ae-a75d-b348dbb218e5" (UID: "ec93766a-8778-44ae-a75d-b348dbb218e5"). InnerVolumeSpecName "kube-api-access-d7kbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:45:03 crc kubenswrapper[4812]: I0216 13:45:03.966131 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7kbc\" (UniqueName: \"kubernetes.io/projected/ec93766a-8778-44ae-a75d-b348dbb218e5-kube-api-access-d7kbc\") on node \"crc\" DevicePath \"\"" Feb 16 13:45:03 crc kubenswrapper[4812]: I0216 13:45:03.966559 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec93766a-8778-44ae-a75d-b348dbb218e5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 13:45:04 crc kubenswrapper[4812]: I0216 13:45:04.276370 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" event={"ID":"ec93766a-8778-44ae-a75d-b348dbb218e5","Type":"ContainerDied","Data":"e3655dbc8cf4e23af3a60f6a2499a3dcef1314b1c1810bc31cbef3173bde29f6"} Feb 16 13:45:04 crc kubenswrapper[4812]: I0216 13:45:04.276413 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3655dbc8cf4e23af3a60f6a2499a3dcef1314b1c1810bc31cbef3173bde29f6" Feb 16 13:45:04 crc kubenswrapper[4812]: I0216 13:45:04.276544 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9" Feb 16 13:45:05 crc kubenswrapper[4812]: I0216 13:45:05.785306 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:45:05 crc kubenswrapper[4812]: I0216 13:45:05.786422 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:45:05 crc kubenswrapper[4812]: I0216 13:45:05.885659 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:45:07 crc kubenswrapper[4812]: I0216 13:45:07.329685 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:45:07 crc kubenswrapper[4812]: I0216 13:45:07.375319 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jqkv"] Feb 16 13:45:09 crc kubenswrapper[4812]: I0216 13:45:09.300194 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8jqkv" podUID="96b0b68d-4734-4d69-aee5-2da69c86a479" containerName="registry-server" containerID="cri-o://8e83c435d6176d48ff0c1a16fb17e882923e135f61d4bc1a2898a90876319b38" gracePeriod=2 Feb 16 13:45:09 crc kubenswrapper[4812]: I0216 13:45:09.796953 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:45:09 crc kubenswrapper[4812]: I0216 13:45:09.842487 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b0b68d-4734-4d69-aee5-2da69c86a479-catalog-content\") pod \"96b0b68d-4734-4d69-aee5-2da69c86a479\" (UID: \"96b0b68d-4734-4d69-aee5-2da69c86a479\") " Feb 16 13:45:09 crc kubenswrapper[4812]: I0216 13:45:09.842593 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srkt9\" (UniqueName: \"kubernetes.io/projected/96b0b68d-4734-4d69-aee5-2da69c86a479-kube-api-access-srkt9\") pod \"96b0b68d-4734-4d69-aee5-2da69c86a479\" (UID: \"96b0b68d-4734-4d69-aee5-2da69c86a479\") " Feb 16 13:45:09 crc kubenswrapper[4812]: I0216 13:45:09.842737 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b0b68d-4734-4d69-aee5-2da69c86a479-utilities\") pod \"96b0b68d-4734-4d69-aee5-2da69c86a479\" (UID: \"96b0b68d-4734-4d69-aee5-2da69c86a479\") " Feb 16 13:45:09 crc kubenswrapper[4812]: I0216 13:45:09.844641 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b0b68d-4734-4d69-aee5-2da69c86a479-utilities" (OuterVolumeSpecName: "utilities") pod "96b0b68d-4734-4d69-aee5-2da69c86a479" (UID: "96b0b68d-4734-4d69-aee5-2da69c86a479"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:45:09 crc kubenswrapper[4812]: I0216 13:45:09.861598 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b0b68d-4734-4d69-aee5-2da69c86a479-kube-api-access-srkt9" (OuterVolumeSpecName: "kube-api-access-srkt9") pod "96b0b68d-4734-4d69-aee5-2da69c86a479" (UID: "96b0b68d-4734-4d69-aee5-2da69c86a479"). InnerVolumeSpecName "kube-api-access-srkt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:45:09 crc kubenswrapper[4812]: I0216 13:45:09.870993 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b0b68d-4734-4d69-aee5-2da69c86a479-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96b0b68d-4734-4d69-aee5-2da69c86a479" (UID: "96b0b68d-4734-4d69-aee5-2da69c86a479"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:45:09 crc kubenswrapper[4812]: I0216 13:45:09.944825 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b0b68d-4734-4d69-aee5-2da69c86a479-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:45:09 crc kubenswrapper[4812]: I0216 13:45:09.944865 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b0b68d-4734-4d69-aee5-2da69c86a479-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:45:09 crc kubenswrapper[4812]: I0216 13:45:09.944878 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srkt9\" (UniqueName: \"kubernetes.io/projected/96b0b68d-4734-4d69-aee5-2da69c86a479-kube-api-access-srkt9\") on node \"crc\" DevicePath \"\"" Feb 16 13:45:10 crc kubenswrapper[4812]: I0216 13:45:10.307256 4812 generic.go:334] "Generic (PLEG): container finished" podID="96b0b68d-4734-4d69-aee5-2da69c86a479" containerID="8e83c435d6176d48ff0c1a16fb17e882923e135f61d4bc1a2898a90876319b38" exitCode=0 Feb 16 13:45:10 crc kubenswrapper[4812]: I0216 13:45:10.307298 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jqkv" event={"ID":"96b0b68d-4734-4d69-aee5-2da69c86a479","Type":"ContainerDied","Data":"8e83c435d6176d48ff0c1a16fb17e882923e135f61d4bc1a2898a90876319b38"} Feb 16 13:45:10 crc kubenswrapper[4812]: I0216 13:45:10.307324 4812 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jqkv" Feb 16 13:45:10 crc kubenswrapper[4812]: I0216 13:45:10.307353 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jqkv" event={"ID":"96b0b68d-4734-4d69-aee5-2da69c86a479","Type":"ContainerDied","Data":"72005a1b8a83642d263df9514ef5614c3565d2c953aeb3b781cf950bd7886b4b"} Feb 16 13:45:10 crc kubenswrapper[4812]: I0216 13:45:10.307377 4812 scope.go:117] "RemoveContainer" containerID="8e83c435d6176d48ff0c1a16fb17e882923e135f61d4bc1a2898a90876319b38" Feb 16 13:45:10 crc kubenswrapper[4812]: I0216 13:45:10.327990 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jqkv"] Feb 16 13:45:10 crc kubenswrapper[4812]: I0216 13:45:10.332406 4812 scope.go:117] "RemoveContainer" containerID="b7dafcf065b2e1f038084f8e53bd16bb7cc605c62aec5c9762d4ec6c0faf7e43" Feb 16 13:45:10 crc kubenswrapper[4812]: I0216 13:45:10.332853 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jqkv"] Feb 16 13:45:10 crc kubenswrapper[4812]: I0216 13:45:10.347584 4812 scope.go:117] "RemoveContainer" containerID="871fd48105e003c5a5a6823146433017f81000cf1af425fae1bd2367e48cd91b" Feb 16 13:45:10 crc kubenswrapper[4812]: I0216 13:45:10.365422 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-9fzvk" Feb 16 13:45:10 crc kubenswrapper[4812]: I0216 13:45:10.366623 4812 scope.go:117] "RemoveContainer" containerID="8e83c435d6176d48ff0c1a16fb17e882923e135f61d4bc1a2898a90876319b38" Feb 16 13:45:10 crc kubenswrapper[4812]: E0216 13:45:10.366978 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e83c435d6176d48ff0c1a16fb17e882923e135f61d4bc1a2898a90876319b38\": container with ID starting with 
8e83c435d6176d48ff0c1a16fb17e882923e135f61d4bc1a2898a90876319b38 not found: ID does not exist" containerID="8e83c435d6176d48ff0c1a16fb17e882923e135f61d4bc1a2898a90876319b38" Feb 16 13:45:10 crc kubenswrapper[4812]: I0216 13:45:10.367011 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e83c435d6176d48ff0c1a16fb17e882923e135f61d4bc1a2898a90876319b38"} err="failed to get container status \"8e83c435d6176d48ff0c1a16fb17e882923e135f61d4bc1a2898a90876319b38\": rpc error: code = NotFound desc = could not find container \"8e83c435d6176d48ff0c1a16fb17e882923e135f61d4bc1a2898a90876319b38\": container with ID starting with 8e83c435d6176d48ff0c1a16fb17e882923e135f61d4bc1a2898a90876319b38 not found: ID does not exist" Feb 16 13:45:10 crc kubenswrapper[4812]: I0216 13:45:10.367034 4812 scope.go:117] "RemoveContainer" containerID="b7dafcf065b2e1f038084f8e53bd16bb7cc605c62aec5c9762d4ec6c0faf7e43" Feb 16 13:45:10 crc kubenswrapper[4812]: E0216 13:45:10.367334 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7dafcf065b2e1f038084f8e53bd16bb7cc605c62aec5c9762d4ec6c0faf7e43\": container with ID starting with b7dafcf065b2e1f038084f8e53bd16bb7cc605c62aec5c9762d4ec6c0faf7e43 not found: ID does not exist" containerID="b7dafcf065b2e1f038084f8e53bd16bb7cc605c62aec5c9762d4ec6c0faf7e43" Feb 16 13:45:10 crc kubenswrapper[4812]: I0216 13:45:10.367365 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7dafcf065b2e1f038084f8e53bd16bb7cc605c62aec5c9762d4ec6c0faf7e43"} err="failed to get container status \"b7dafcf065b2e1f038084f8e53bd16bb7cc605c62aec5c9762d4ec6c0faf7e43\": rpc error: code = NotFound desc = could not find container \"b7dafcf065b2e1f038084f8e53bd16bb7cc605c62aec5c9762d4ec6c0faf7e43\": container with ID starting with b7dafcf065b2e1f038084f8e53bd16bb7cc605c62aec5c9762d4ec6c0faf7e43 not found: ID does not 
exist" Feb 16 13:45:10 crc kubenswrapper[4812]: I0216 13:45:10.367383 4812 scope.go:117] "RemoveContainer" containerID="871fd48105e003c5a5a6823146433017f81000cf1af425fae1bd2367e48cd91b" Feb 16 13:45:10 crc kubenswrapper[4812]: E0216 13:45:10.367990 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"871fd48105e003c5a5a6823146433017f81000cf1af425fae1bd2367e48cd91b\": container with ID starting with 871fd48105e003c5a5a6823146433017f81000cf1af425fae1bd2367e48cd91b not found: ID does not exist" containerID="871fd48105e003c5a5a6823146433017f81000cf1af425fae1bd2367e48cd91b" Feb 16 13:45:10 crc kubenswrapper[4812]: I0216 13:45:10.368019 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"871fd48105e003c5a5a6823146433017f81000cf1af425fae1bd2367e48cd91b"} err="failed to get container status \"871fd48105e003c5a5a6823146433017f81000cf1af425fae1bd2367e48cd91b\": rpc error: code = NotFound desc = could not find container \"871fd48105e003c5a5a6823146433017f81000cf1af425fae1bd2367e48cd91b\": container with ID starting with 871fd48105e003c5a5a6823146433017f81000cf1af425fae1bd2367e48cd91b not found: ID does not exist" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.600306 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5bdk9"] Feb 16 13:45:11 crc kubenswrapper[4812]: E0216 13:45:11.600556 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b0b68d-4734-4d69-aee5-2da69c86a479" containerName="extract-utilities" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.600571 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b0b68d-4734-4d69-aee5-2da69c86a479" containerName="extract-utilities" Feb 16 13:45:11 crc kubenswrapper[4812]: E0216 13:45:11.600584 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec93766a-8778-44ae-a75d-b348dbb218e5" 
containerName="collect-profiles" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.600591 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec93766a-8778-44ae-a75d-b348dbb218e5" containerName="collect-profiles" Feb 16 13:45:11 crc kubenswrapper[4812]: E0216 13:45:11.600601 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b0b68d-4734-4d69-aee5-2da69c86a479" containerName="extract-content" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.600608 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b0b68d-4734-4d69-aee5-2da69c86a479" containerName="extract-content" Feb 16 13:45:11 crc kubenswrapper[4812]: E0216 13:45:11.600628 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b0b68d-4734-4d69-aee5-2da69c86a479" containerName="registry-server" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.600636 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b0b68d-4734-4d69-aee5-2da69c86a479" containerName="registry-server" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.600757 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b0b68d-4734-4d69-aee5-2da69c86a479" containerName="registry-server" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.600771 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec93766a-8778-44ae-a75d-b348dbb218e5" containerName="collect-profiles" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.601253 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5bdk9" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.603898 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.604250 4812 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-kvdd6" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.604486 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.623063 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5bdk9"] Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.631397 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-bss4n"] Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.632338 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-bss4n" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.635278 4812 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lmlnn" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.648998 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mb4rm"] Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.649919 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mb4rm" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.653477 4812 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-s94x6" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.654537 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-bss4n"] Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.661525 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mb4rm"] Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.667411 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpt9l\" (UniqueName: \"kubernetes.io/projected/9b3c3773-e9da-431b-863a-0a3df06713d0-kube-api-access-mpt9l\") pod \"cert-manager-cainjector-cf98fcc89-5bdk9\" (UID: \"9b3c3773-e9da-431b-863a-0a3df06713d0\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5bdk9" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.667472 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb7wd\" (UniqueName: \"kubernetes.io/projected/9cb816f6-841f-4759-9598-ec4ea11806c4-kube-api-access-zb7wd\") pod \"cert-manager-webhook-687f57d79b-mb4rm\" (UID: \"9cb816f6-841f-4759-9598-ec4ea11806c4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mb4rm" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.667540 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lz68\" (UniqueName: \"kubernetes.io/projected/ffb00ae0-8006-44ba-8c11-eed07e479ec6-kube-api-access-8lz68\") pod \"cert-manager-858654f9db-bss4n\" (UID: \"ffb00ae0-8006-44ba-8c11-eed07e479ec6\") " pod="cert-manager/cert-manager-858654f9db-bss4n" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.768797 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lz68\" (UniqueName: \"kubernetes.io/projected/ffb00ae0-8006-44ba-8c11-eed07e479ec6-kube-api-access-8lz68\") pod \"cert-manager-858654f9db-bss4n\" (UID: \"ffb00ae0-8006-44ba-8c11-eed07e479ec6\") " pod="cert-manager/cert-manager-858654f9db-bss4n" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.768876 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpt9l\" (UniqueName: \"kubernetes.io/projected/9b3c3773-e9da-431b-863a-0a3df06713d0-kube-api-access-mpt9l\") pod \"cert-manager-cainjector-cf98fcc89-5bdk9\" (UID: \"9b3c3773-e9da-431b-863a-0a3df06713d0\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-5bdk9" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.768903 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb7wd\" (UniqueName: \"kubernetes.io/projected/9cb816f6-841f-4759-9598-ec4ea11806c4-kube-api-access-zb7wd\") pod \"cert-manager-webhook-687f57d79b-mb4rm\" (UID: \"9cb816f6-841f-4759-9598-ec4ea11806c4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mb4rm" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.786742 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb7wd\" (UniqueName: \"kubernetes.io/projected/9cb816f6-841f-4759-9598-ec4ea11806c4-kube-api-access-zb7wd\") pod \"cert-manager-webhook-687f57d79b-mb4rm\" (UID: \"9cb816f6-841f-4759-9598-ec4ea11806c4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mb4rm" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.790798 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpt9l\" (UniqueName: \"kubernetes.io/projected/9b3c3773-e9da-431b-863a-0a3df06713d0-kube-api-access-mpt9l\") pod \"cert-manager-cainjector-cf98fcc89-5bdk9\" (UID: \"9b3c3773-e9da-431b-863a-0a3df06713d0\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-5bdk9" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.795274 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lz68\" (UniqueName: \"kubernetes.io/projected/ffb00ae0-8006-44ba-8c11-eed07e479ec6-kube-api-access-8lz68\") pod \"cert-manager-858654f9db-bss4n\" (UID: \"ffb00ae0-8006-44ba-8c11-eed07e479ec6\") " pod="cert-manager/cert-manager-858654f9db-bss4n" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.886518 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b0b68d-4734-4d69-aee5-2da69c86a479" path="/var/lib/kubelet/pods/96b0b68d-4734-4d69-aee5-2da69c86a479/volumes" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.925226 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5bdk9" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.950599 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-bss4n" Feb 16 13:45:11 crc kubenswrapper[4812]: I0216 13:45:11.971775 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mb4rm" Feb 16 13:45:12 crc kubenswrapper[4812]: I0216 13:45:12.433157 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-5bdk9"] Feb 16 13:45:12 crc kubenswrapper[4812]: I0216 13:45:12.680263 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mb4rm"] Feb 16 13:45:12 crc kubenswrapper[4812]: W0216 13:45:12.688566 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cb816f6_841f_4759_9598_ec4ea11806c4.slice/crio-e5c593d3dbaae62ff437180d19ffac7231dd84e94caf8263690ed09775f9c19e WatchSource:0}: Error finding container e5c593d3dbaae62ff437180d19ffac7231dd84e94caf8263690ed09775f9c19e: Status 404 returned error can't find the container with id e5c593d3dbaae62ff437180d19ffac7231dd84e94caf8263690ed09775f9c19e Feb 16 13:45:12 crc kubenswrapper[4812]: I0216 13:45:12.803251 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-bss4n"] Feb 16 13:45:12 crc kubenswrapper[4812]: W0216 13:45:12.810323 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffb00ae0_8006_44ba_8c11_eed07e479ec6.slice/crio-7d34f37bac0b50d9db6f11d9a2eb30eaafafeb74b751547547ecec947c3c199c WatchSource:0}: Error finding container 7d34f37bac0b50d9db6f11d9a2eb30eaafafeb74b751547547ecec947c3c199c: Status 404 returned error can't find the container with id 7d34f37bac0b50d9db6f11d9a2eb30eaafafeb74b751547547ecec947c3c199c Feb 16 13:45:13 crc kubenswrapper[4812]: I0216 13:45:13.383068 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bss4n" 
event={"ID":"ffb00ae0-8006-44ba-8c11-eed07e479ec6","Type":"ContainerStarted","Data":"7d34f37bac0b50d9db6f11d9a2eb30eaafafeb74b751547547ecec947c3c199c"} Feb 16 13:45:13 crc kubenswrapper[4812]: I0216 13:45:13.384761 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5bdk9" event={"ID":"9b3c3773-e9da-431b-863a-0a3df06713d0","Type":"ContainerStarted","Data":"d68ba66b8dfb1cfe8ff478e74bb351670e181456496d50380d89227c1e5b6994"} Feb 16 13:45:13 crc kubenswrapper[4812]: I0216 13:45:13.385765 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mb4rm" event={"ID":"9cb816f6-841f-4759-9598-ec4ea11806c4","Type":"ContainerStarted","Data":"e5c593d3dbaae62ff437180d19ffac7231dd84e94caf8263690ed09775f9c19e"} Feb 16 13:45:19 crc kubenswrapper[4812]: I0216 13:45:19.535818 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5bdk9" event={"ID":"9b3c3773-e9da-431b-863a-0a3df06713d0","Type":"ContainerStarted","Data":"0938ac345a75713fe330191a6638adbf09da97e870c4d252e736d7d40f915ca7"} Feb 16 13:45:19 crc kubenswrapper[4812]: I0216 13:45:19.557290 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-5bdk9" podStartSLOduration=2.162859209 podStartE2EDuration="8.557266798s" podCreationTimestamp="2026-02-16 13:45:11 +0000 UTC" firstStartedPulling="2026-02-16 13:45:12.485104559 +0000 UTC m=+801.549435260" lastFinishedPulling="2026-02-16 13:45:18.879512148 +0000 UTC m=+807.943842849" observedRunningTime="2026-02-16 13:45:19.556079395 +0000 UTC m=+808.620410106" watchObservedRunningTime="2026-02-16 13:45:19.557266798 +0000 UTC m=+808.621597499" Feb 16 13:45:21 crc kubenswrapper[4812]: I0216 13:45:21.548752 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mb4rm" 
event={"ID":"9cb816f6-841f-4759-9598-ec4ea11806c4","Type":"ContainerStarted","Data":"1cb9137bc16bc5c3a6faf8f632340cf50ca615bb1765cfa3c1e9d3923b984044"} Feb 16 13:45:21 crc kubenswrapper[4812]: I0216 13:45:21.549348 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-mb4rm" Feb 16 13:45:21 crc kubenswrapper[4812]: I0216 13:45:21.550496 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bss4n" event={"ID":"ffb00ae0-8006-44ba-8c11-eed07e479ec6","Type":"ContainerStarted","Data":"2649c2918c22e67740cb56d214984c8ad64d3639fbaf99eb1cd2688d2a9a54a0"} Feb 16 13:45:21 crc kubenswrapper[4812]: I0216 13:45:21.566134 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-mb4rm" podStartSLOduration=2.808129567 podStartE2EDuration="10.566113383s" podCreationTimestamp="2026-02-16 13:45:11 +0000 UTC" firstStartedPulling="2026-02-16 13:45:12.691008174 +0000 UTC m=+801.755338875" lastFinishedPulling="2026-02-16 13:45:20.44899199 +0000 UTC m=+809.513322691" observedRunningTime="2026-02-16 13:45:21.564190319 +0000 UTC m=+810.628521040" watchObservedRunningTime="2026-02-16 13:45:21.566113383 +0000 UTC m=+810.630444084" Feb 16 13:45:21 crc kubenswrapper[4812]: I0216 13:45:21.585894 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-bss4n" podStartSLOduration=3.100138721 podStartE2EDuration="10.585871038s" podCreationTimestamp="2026-02-16 13:45:11 +0000 UTC" firstStartedPulling="2026-02-16 13:45:12.812503517 +0000 UTC m=+801.876834218" lastFinishedPulling="2026-02-16 13:45:20.298235834 +0000 UTC m=+809.362566535" observedRunningTime="2026-02-16 13:45:21.578753668 +0000 UTC m=+810.643084389" watchObservedRunningTime="2026-02-16 13:45:21.585871038 +0000 UTC m=+810.650201739" Feb 16 13:45:26 crc kubenswrapper[4812]: I0216 13:45:26.975043 4812 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-mb4rm" Feb 16 13:45:57 crc kubenswrapper[4812]: I0216 13:45:57.807079 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9"] Feb 16 13:45:57 crc kubenswrapper[4812]: I0216 13:45:57.808679 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" Feb 16 13:45:57 crc kubenswrapper[4812]: I0216 13:45:57.810954 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 13:45:57 crc kubenswrapper[4812]: I0216 13:45:57.824959 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9"] Feb 16 13:45:57 crc kubenswrapper[4812]: I0216 13:45:57.880744 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a794509c-f142-4184-80c5-38d6095917df-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9\" (UID: \"a794509c-f142-4184-80c5-38d6095917df\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" Feb 16 13:45:57 crc kubenswrapper[4812]: I0216 13:45:57.881053 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a794509c-f142-4184-80c5-38d6095917df-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9\" (UID: \"a794509c-f142-4184-80c5-38d6095917df\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" Feb 16 13:45:57 crc kubenswrapper[4812]: I0216 13:45:57.881165 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mg5m\" (UniqueName: \"kubernetes.io/projected/a794509c-f142-4184-80c5-38d6095917df-kube-api-access-4mg5m\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9\" (UID: \"a794509c-f142-4184-80c5-38d6095917df\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" Feb 16 13:45:57 crc kubenswrapper[4812]: I0216 13:45:57.982898 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a794509c-f142-4184-80c5-38d6095917df-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9\" (UID: \"a794509c-f142-4184-80c5-38d6095917df\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" Feb 16 13:45:57 crc kubenswrapper[4812]: I0216 13:45:57.982949 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a794509c-f142-4184-80c5-38d6095917df-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9\" (UID: \"a794509c-f142-4184-80c5-38d6095917df\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" Feb 16 13:45:57 crc kubenswrapper[4812]: I0216 13:45:57.982975 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mg5m\" (UniqueName: \"kubernetes.io/projected/a794509c-f142-4184-80c5-38d6095917df-kube-api-access-4mg5m\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9\" (UID: \"a794509c-f142-4184-80c5-38d6095917df\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" Feb 16 13:45:57 crc kubenswrapper[4812]: I0216 13:45:57.983562 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a794509c-f142-4184-80c5-38d6095917df-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9\" (UID: \"a794509c-f142-4184-80c5-38d6095917df\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" Feb 16 13:45:57 crc kubenswrapper[4812]: I0216 13:45:57.983687 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a794509c-f142-4184-80c5-38d6095917df-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9\" (UID: \"a794509c-f142-4184-80c5-38d6095917df\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" Feb 16 13:45:58 crc kubenswrapper[4812]: I0216 13:45:58.003329 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mg5m\" (UniqueName: \"kubernetes.io/projected/a794509c-f142-4184-80c5-38d6095917df-kube-api-access-4mg5m\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9\" (UID: \"a794509c-f142-4184-80c5-38d6095917df\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" Feb 16 13:45:58 crc kubenswrapper[4812]: I0216 13:45:58.122187 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" Feb 16 13:45:58 crc kubenswrapper[4812]: I0216 13:45:58.601049 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9"] Feb 16 13:45:58 crc kubenswrapper[4812]: I0216 13:45:58.756555 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" event={"ID":"a794509c-f142-4184-80c5-38d6095917df","Type":"ContainerStarted","Data":"b1358cda6bf881c17aa595dfa6b256e441478f052d605d0dd3b1cb3801c1d25f"} Feb 16 13:45:58 crc kubenswrapper[4812]: I0216 13:45:58.756901 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" event={"ID":"a794509c-f142-4184-80c5-38d6095917df","Type":"ContainerStarted","Data":"7ba93bc45ba2bda9b14088d12ebef158d360d14f80fb042bda3bea940f2a926b"} Feb 16 13:45:59 crc kubenswrapper[4812]: I0216 13:45:59.763593 4812 generic.go:334] "Generic (PLEG): container finished" podID="a794509c-f142-4184-80c5-38d6095917df" containerID="b1358cda6bf881c17aa595dfa6b256e441478f052d605d0dd3b1cb3801c1d25f" exitCode=0 Feb 16 13:45:59 crc kubenswrapper[4812]: I0216 13:45:59.763641 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" event={"ID":"a794509c-f142-4184-80c5-38d6095917df","Type":"ContainerDied","Data":"b1358cda6bf881c17aa595dfa6b256e441478f052d605d0dd3b1cb3801c1d25f"} Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.174067 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vx2lk"] Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.175676 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.185043 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vx2lk"] Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.191074 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.192059 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.194135 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.194240 4812 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-6lt74" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.194306 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.210217 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce64d471-324b-443d-8ef0-b13ab7882905-utilities\") pod \"redhat-operators-vx2lk\" (UID: \"ce64d471-324b-443d-8ef0-b13ab7882905\") " pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.210322 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hp2p\" (UniqueName: \"kubernetes.io/projected/ce64d471-324b-443d-8ef0-b13ab7882905-kube-api-access-2hp2p\") pod \"redhat-operators-vx2lk\" (UID: \"ce64d471-324b-443d-8ef0-b13ab7882905\") " pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.210355 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce64d471-324b-443d-8ef0-b13ab7882905-catalog-content\") pod \"redhat-operators-vx2lk\" (UID: \"ce64d471-324b-443d-8ef0-b13ab7882905\") " pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.211123 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.311906 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgdk9\" (UniqueName: \"kubernetes.io/projected/0877e3ce-2822-459a-ac3b-74d4ba709895-kube-api-access-jgdk9\") pod \"minio\" (UID: \"0877e3ce-2822-459a-ac3b-74d4ba709895\") " pod="minio-dev/minio" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.312001 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce64d471-324b-443d-8ef0-b13ab7882905-utilities\") pod \"redhat-operators-vx2lk\" (UID: \"ce64d471-324b-443d-8ef0-b13ab7882905\") " pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.312035 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hp2p\" (UniqueName: \"kubernetes.io/projected/ce64d471-324b-443d-8ef0-b13ab7882905-kube-api-access-2hp2p\") pod \"redhat-operators-vx2lk\" (UID: \"ce64d471-324b-443d-8ef0-b13ab7882905\") " pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.312054 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce64d471-324b-443d-8ef0-b13ab7882905-catalog-content\") pod \"redhat-operators-vx2lk\" (UID: \"ce64d471-324b-443d-8ef0-b13ab7882905\") " 
pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.312106 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4a0c7616-5fd1-49aa-b5eb-b98db31c9866\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a0c7616-5fd1-49aa-b5eb-b98db31c9866\") pod \"minio\" (UID: \"0877e3ce-2822-459a-ac3b-74d4ba709895\") " pod="minio-dev/minio" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.312985 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce64d471-324b-443d-8ef0-b13ab7882905-utilities\") pod \"redhat-operators-vx2lk\" (UID: \"ce64d471-324b-443d-8ef0-b13ab7882905\") " pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.313150 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce64d471-324b-443d-8ef0-b13ab7882905-catalog-content\") pod \"redhat-operators-vx2lk\" (UID: \"ce64d471-324b-443d-8ef0-b13ab7882905\") " pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.335709 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hp2p\" (UniqueName: \"kubernetes.io/projected/ce64d471-324b-443d-8ef0-b13ab7882905-kube-api-access-2hp2p\") pod \"redhat-operators-vx2lk\" (UID: \"ce64d471-324b-443d-8ef0-b13ab7882905\") " pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.413821 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4a0c7616-5fd1-49aa-b5eb-b98db31c9866\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a0c7616-5fd1-49aa-b5eb-b98db31c9866\") pod \"minio\" (UID: \"0877e3ce-2822-459a-ac3b-74d4ba709895\") " pod="minio-dev/minio" Feb 16 
13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.413889 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgdk9\" (UniqueName: \"kubernetes.io/projected/0877e3ce-2822-459a-ac3b-74d4ba709895-kube-api-access-jgdk9\") pod \"minio\" (UID: \"0877e3ce-2822-459a-ac3b-74d4ba709895\") " pod="minio-dev/minio" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.438649 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.438698 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4a0c7616-5fd1-49aa-b5eb-b98db31c9866\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a0c7616-5fd1-49aa-b5eb-b98db31c9866\") pod \"minio\" (UID: \"0877e3ce-2822-459a-ac3b-74d4ba709895\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5d71136b1c447c9fc4b4f13fd3355af0772bf7f376f1c7019c8a1877a91fade1/globalmount\"" pod="minio-dev/minio" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.443106 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgdk9\" (UniqueName: \"kubernetes.io/projected/0877e3ce-2822-459a-ac3b-74d4ba709895-kube-api-access-jgdk9\") pod \"minio\" (UID: \"0877e3ce-2822-459a-ac3b-74d4ba709895\") " pod="minio-dev/minio" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.495741 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.545350 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4a0c7616-5fd1-49aa-b5eb-b98db31c9866\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a0c7616-5fd1-49aa-b5eb-b98db31c9866\") pod \"minio\" (UID: \"0877e3ce-2822-459a-ac3b-74d4ba709895\") " pod="minio-dev/minio" Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.741791 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vx2lk"] Feb 16 13:46:00 crc kubenswrapper[4812]: W0216 13:46:00.747335 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce64d471_324b_443d_8ef0_b13ab7882905.slice/crio-a6a7bd3064cfe081d32ea255743f6fba110d657c35a8c12a72a240078cf677f1 WatchSource:0}: Error finding container a6a7bd3064cfe081d32ea255743f6fba110d657c35a8c12a72a240078cf677f1: Status 404 returned error can't find the container with id a6a7bd3064cfe081d32ea255743f6fba110d657c35a8c12a72a240078cf677f1 Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.770899 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vx2lk" event={"ID":"ce64d471-324b-443d-8ef0-b13ab7882905","Type":"ContainerStarted","Data":"a6a7bd3064cfe081d32ea255743f6fba110d657c35a8c12a72a240078cf677f1"} Feb 16 13:46:00 crc kubenswrapper[4812]: I0216 13:46:00.819491 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 16 13:46:01 crc kubenswrapper[4812]: I0216 13:46:01.153235 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 16 13:46:01 crc kubenswrapper[4812]: W0216 13:46:01.161838 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0877e3ce_2822_459a_ac3b_74d4ba709895.slice/crio-57827b3cc232ff96a3ece6d7edafc737d1696912859f3bb33b2a702be1426d28 WatchSource:0}: Error finding container 57827b3cc232ff96a3ece6d7edafc737d1696912859f3bb33b2a702be1426d28: Status 404 returned error can't find the container with id 57827b3cc232ff96a3ece6d7edafc737d1696912859f3bb33b2a702be1426d28 Feb 16 13:46:01 crc kubenswrapper[4812]: I0216 13:46:01.780606 4812 generic.go:334] "Generic (PLEG): container finished" podID="ce64d471-324b-443d-8ef0-b13ab7882905" containerID="fe06c7d86232828306c395487239889a4a5aa43c55e0f6b372ec6ffe6d829501" exitCode=0 Feb 16 13:46:01 crc kubenswrapper[4812]: I0216 13:46:01.780830 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vx2lk" event={"ID":"ce64d471-324b-443d-8ef0-b13ab7882905","Type":"ContainerDied","Data":"fe06c7d86232828306c395487239889a4a5aa43c55e0f6b372ec6ffe6d829501"} Feb 16 13:46:01 crc kubenswrapper[4812]: I0216 13:46:01.783985 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"0877e3ce-2822-459a-ac3b-74d4ba709895","Type":"ContainerStarted","Data":"57827b3cc232ff96a3ece6d7edafc737d1696912859f3bb33b2a702be1426d28"} Feb 16 13:46:01 crc kubenswrapper[4812]: I0216 13:46:01.786349 4812 generic.go:334] "Generic (PLEG): container finished" podID="a794509c-f142-4184-80c5-38d6095917df" containerID="9955de8437b4eea3f9930ce5e7497b9ee47a0fef1686ece5cc4d1d492b5bd2ff" exitCode=0 Feb 16 13:46:01 crc kubenswrapper[4812]: I0216 13:46:01.786385 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" event={"ID":"a794509c-f142-4184-80c5-38d6095917df","Type":"ContainerDied","Data":"9955de8437b4eea3f9930ce5e7497b9ee47a0fef1686ece5cc4d1d492b5bd2ff"} Feb 16 13:46:02 crc kubenswrapper[4812]: I0216 13:46:02.805044 4812 generic.go:334] "Generic (PLEG): container finished" podID="a794509c-f142-4184-80c5-38d6095917df" containerID="d1b46d7332909f42b5dd0cb71af9294db42dfddd8685a2f7bce3029b4667ae59" exitCode=0 Feb 16 13:46:02 crc kubenswrapper[4812]: I0216 13:46:02.805149 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" event={"ID":"a794509c-f142-4184-80c5-38d6095917df","Type":"ContainerDied","Data":"d1b46d7332909f42b5dd0cb71af9294db42dfddd8685a2f7bce3029b4667ae59"} Feb 16 13:46:04 crc kubenswrapper[4812]: I0216 13:46:04.837205 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vx2lk" event={"ID":"ce64d471-324b-443d-8ef0-b13ab7882905","Type":"ContainerStarted","Data":"23b3300c9fb0fd2788ad46531c27101d5f84b2956f80b9157d56568f75f9dc8e"} Feb 16 13:46:05 crc kubenswrapper[4812]: I0216 13:46:05.143343 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" Feb 16 13:46:05 crc kubenswrapper[4812]: I0216 13:46:05.313089 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a794509c-f142-4184-80c5-38d6095917df-util\") pod \"a794509c-f142-4184-80c5-38d6095917df\" (UID: \"a794509c-f142-4184-80c5-38d6095917df\") " Feb 16 13:46:05 crc kubenswrapper[4812]: I0216 13:46:05.313159 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a794509c-f142-4184-80c5-38d6095917df-bundle\") pod \"a794509c-f142-4184-80c5-38d6095917df\" (UID: \"a794509c-f142-4184-80c5-38d6095917df\") " Feb 16 13:46:05 crc kubenswrapper[4812]: I0216 13:46:05.313228 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mg5m\" (UniqueName: \"kubernetes.io/projected/a794509c-f142-4184-80c5-38d6095917df-kube-api-access-4mg5m\") pod \"a794509c-f142-4184-80c5-38d6095917df\" (UID: \"a794509c-f142-4184-80c5-38d6095917df\") " Feb 16 13:46:05 crc kubenswrapper[4812]: I0216 13:46:05.318587 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a794509c-f142-4184-80c5-38d6095917df-kube-api-access-4mg5m" (OuterVolumeSpecName: "kube-api-access-4mg5m") pod "a794509c-f142-4184-80c5-38d6095917df" (UID: "a794509c-f142-4184-80c5-38d6095917df"). InnerVolumeSpecName "kube-api-access-4mg5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:46:05 crc kubenswrapper[4812]: I0216 13:46:05.326188 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a794509c-f142-4184-80c5-38d6095917df-bundle" (OuterVolumeSpecName: "bundle") pod "a794509c-f142-4184-80c5-38d6095917df" (UID: "a794509c-f142-4184-80c5-38d6095917df"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:46:05 crc kubenswrapper[4812]: I0216 13:46:05.331532 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a794509c-f142-4184-80c5-38d6095917df-util" (OuterVolumeSpecName: "util") pod "a794509c-f142-4184-80c5-38d6095917df" (UID: "a794509c-f142-4184-80c5-38d6095917df"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:46:05 crc kubenswrapper[4812]: I0216 13:46:05.414655 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mg5m\" (UniqueName: \"kubernetes.io/projected/a794509c-f142-4184-80c5-38d6095917df-kube-api-access-4mg5m\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:05 crc kubenswrapper[4812]: I0216 13:46:05.414688 4812 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a794509c-f142-4184-80c5-38d6095917df-util\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:05 crc kubenswrapper[4812]: I0216 13:46:05.414701 4812 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a794509c-f142-4184-80c5-38d6095917df-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:05 crc kubenswrapper[4812]: I0216 13:46:05.847359 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" Feb 16 13:46:05 crc kubenswrapper[4812]: I0216 13:46:05.847380 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9" event={"ID":"a794509c-f142-4184-80c5-38d6095917df","Type":"ContainerDied","Data":"7ba93bc45ba2bda9b14088d12ebef158d360d14f80fb042bda3bea940f2a926b"} Feb 16 13:46:05 crc kubenswrapper[4812]: I0216 13:46:05.847482 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ba93bc45ba2bda9b14088d12ebef158d360d14f80fb042bda3bea940f2a926b" Feb 16 13:46:06 crc kubenswrapper[4812]: I0216 13:46:06.950001 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"0877e3ce-2822-459a-ac3b-74d4ba709895","Type":"ContainerStarted","Data":"e826b918dddc96313e6920334b2e7d4d4c390f4be5c63b9c9fa4e578293b42e4"} Feb 16 13:46:07 crc kubenswrapper[4812]: I0216 13:46:07.003140 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=5.042438045 podStartE2EDuration="10.003108982s" podCreationTimestamp="2026-02-16 13:45:57 +0000 UTC" firstStartedPulling="2026-02-16 13:46:01.164081676 +0000 UTC m=+850.228412377" lastFinishedPulling="2026-02-16 13:46:06.124752613 +0000 UTC m=+855.189083314" observedRunningTime="2026-02-16 13:46:07.00200323 +0000 UTC m=+856.066333931" watchObservedRunningTime="2026-02-16 13:46:07.003108982 +0000 UTC m=+856.067439683" Feb 16 13:46:09 crc kubenswrapper[4812]: I0216 13:46:09.020905 4812 generic.go:334] "Generic (PLEG): container finished" podID="ce64d471-324b-443d-8ef0-b13ab7882905" containerID="23b3300c9fb0fd2788ad46531c27101d5f84b2956f80b9157d56568f75f9dc8e" exitCode=0 Feb 16 13:46:09 crc kubenswrapper[4812]: I0216 13:46:09.020933 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-vx2lk" event={"ID":"ce64d471-324b-443d-8ef0-b13ab7882905","Type":"ContainerDied","Data":"23b3300c9fb0fd2788ad46531c27101d5f84b2956f80b9157d56568f75f9dc8e"} Feb 16 13:46:10 crc kubenswrapper[4812]: I0216 13:46:10.046802 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vx2lk" event={"ID":"ce64d471-324b-443d-8ef0-b13ab7882905","Type":"ContainerStarted","Data":"b66b1ebbbd3a11fe13a6cf2155fc2a9be48ca03a951a818280321f508993968a"} Feb 16 13:46:10 crc kubenswrapper[4812]: I0216 13:46:10.085374 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vx2lk" podStartSLOduration=2.199719402 podStartE2EDuration="10.085351698s" podCreationTimestamp="2026-02-16 13:46:00 +0000 UTC" firstStartedPulling="2026-02-16 13:46:01.782505941 +0000 UTC m=+850.846836642" lastFinishedPulling="2026-02-16 13:46:09.668138237 +0000 UTC m=+858.732468938" observedRunningTime="2026-02-16 13:46:10.070807492 +0000 UTC m=+859.135138203" watchObservedRunningTime="2026-02-16 13:46:10.085351698 +0000 UTC m=+859.149682419" Feb 16 13:46:10 crc kubenswrapper[4812]: I0216 13:46:10.496312 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:10 crc kubenswrapper[4812]: I0216 13:46:10.496377 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:11 crc kubenswrapper[4812]: I0216 13:46:11.824839 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vx2lk" podUID="ce64d471-324b-443d-8ef0-b13ab7882905" containerName="registry-server" probeResult="failure" output=< Feb 16 13:46:11 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 16 13:46:11 crc kubenswrapper[4812]: > Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 
13:46:12.730381 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9"] Feb 16 13:46:12 crc kubenswrapper[4812]: E0216 13:46:12.730611 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a794509c-f142-4184-80c5-38d6095917df" containerName="pull" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.730624 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a794509c-f142-4184-80c5-38d6095917df" containerName="pull" Feb 16 13:46:12 crc kubenswrapper[4812]: E0216 13:46:12.730638 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a794509c-f142-4184-80c5-38d6095917df" containerName="extract" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.730644 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a794509c-f142-4184-80c5-38d6095917df" containerName="extract" Feb 16 13:46:12 crc kubenswrapper[4812]: E0216 13:46:12.730658 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a794509c-f142-4184-80c5-38d6095917df" containerName="util" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.730665 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a794509c-f142-4184-80c5-38d6095917df" containerName="util" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.730760 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a794509c-f142-4184-80c5-38d6095917df" containerName="extract" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.731311 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.734980 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-n4b4m" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.736774 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.736898 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.736967 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.736898 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.738233 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.752296 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9"] Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.863164 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/f8571da4-b4fd-4d36-923e-f0924cb993e9-manager-config\") pod \"loki-operator-controller-manager-7db4b9ddb7-grxq9\" (UID: \"f8571da4-b4fd-4d36-923e-f0924cb993e9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 
13:46:12.863231 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8571da4-b4fd-4d36-923e-f0924cb993e9-webhook-cert\") pod \"loki-operator-controller-manager-7db4b9ddb7-grxq9\" (UID: \"f8571da4-b4fd-4d36-923e-f0924cb993e9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.863257 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8571da4-b4fd-4d36-923e-f0924cb993e9-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7db4b9ddb7-grxq9\" (UID: \"f8571da4-b4fd-4d36-923e-f0924cb993e9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.863302 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d22cr\" (UniqueName: \"kubernetes.io/projected/f8571da4-b4fd-4d36-923e-f0924cb993e9-kube-api-access-d22cr\") pod \"loki-operator-controller-manager-7db4b9ddb7-grxq9\" (UID: \"f8571da4-b4fd-4d36-923e-f0924cb993e9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.863326 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f8571da4-b4fd-4d36-923e-f0924cb993e9-apiservice-cert\") pod \"loki-operator-controller-manager-7db4b9ddb7-grxq9\" (UID: \"f8571da4-b4fd-4d36-923e-f0924cb993e9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.964065 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"manager-config\" (UniqueName: \"kubernetes.io/configmap/f8571da4-b4fd-4d36-923e-f0924cb993e9-manager-config\") pod \"loki-operator-controller-manager-7db4b9ddb7-grxq9\" (UID: \"f8571da4-b4fd-4d36-923e-f0924cb993e9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.964157 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8571da4-b4fd-4d36-923e-f0924cb993e9-webhook-cert\") pod \"loki-operator-controller-manager-7db4b9ddb7-grxq9\" (UID: \"f8571da4-b4fd-4d36-923e-f0924cb993e9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.964188 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8571da4-b4fd-4d36-923e-f0924cb993e9-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7db4b9ddb7-grxq9\" (UID: \"f8571da4-b4fd-4d36-923e-f0924cb993e9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.964236 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d22cr\" (UniqueName: \"kubernetes.io/projected/f8571da4-b4fd-4d36-923e-f0924cb993e9-kube-api-access-d22cr\") pod \"loki-operator-controller-manager-7db4b9ddb7-grxq9\" (UID: \"f8571da4-b4fd-4d36-923e-f0924cb993e9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.964258 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f8571da4-b4fd-4d36-923e-f0924cb993e9-apiservice-cert\") pod \"loki-operator-controller-manager-7db4b9ddb7-grxq9\" (UID: 
\"f8571da4-b4fd-4d36-923e-f0924cb993e9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.965208 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/f8571da4-b4fd-4d36-923e-f0924cb993e9-manager-config\") pod \"loki-operator-controller-manager-7db4b9ddb7-grxq9\" (UID: \"f8571da4-b4fd-4d36-923e-f0924cb993e9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.972395 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8571da4-b4fd-4d36-923e-f0924cb993e9-webhook-cert\") pod \"loki-operator-controller-manager-7db4b9ddb7-grxq9\" (UID: \"f8571da4-b4fd-4d36-923e-f0924cb993e9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.972413 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f8571da4-b4fd-4d36-923e-f0924cb993e9-apiservice-cert\") pod \"loki-operator-controller-manager-7db4b9ddb7-grxq9\" (UID: \"f8571da4-b4fd-4d36-923e-f0924cb993e9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:12 crc kubenswrapper[4812]: I0216 13:46:12.972967 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8571da4-b4fd-4d36-923e-f0924cb993e9-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7db4b9ddb7-grxq9\" (UID: \"f8571da4-b4fd-4d36-923e-f0924cb993e9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:13 crc kubenswrapper[4812]: I0216 13:46:13.096561 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d22cr\" (UniqueName: \"kubernetes.io/projected/f8571da4-b4fd-4d36-923e-f0924cb993e9-kube-api-access-d22cr\") pod \"loki-operator-controller-manager-7db4b9ddb7-grxq9\" (UID: \"f8571da4-b4fd-4d36-923e-f0924cb993e9\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:13 crc kubenswrapper[4812]: I0216 13:46:13.473850 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:14 crc kubenswrapper[4812]: I0216 13:46:14.540996 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9"] Feb 16 13:46:14 crc kubenswrapper[4812]: W0216 13:46:14.562552 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8571da4_b4fd_4d36_923e_f0924cb993e9.slice/crio-3f2868c37cfe309f25fe4f2eb5d658d7d785933410bad1b1de398f498c71c445 WatchSource:0}: Error finding container 3f2868c37cfe309f25fe4f2eb5d658d7d785933410bad1b1de398f498c71c445: Status 404 returned error can't find the container with id 3f2868c37cfe309f25fe4f2eb5d658d7d785933410bad1b1de398f498c71c445 Feb 16 13:46:15 crc kubenswrapper[4812]: I0216 13:46:15.111046 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" event={"ID":"f8571da4-b4fd-4d36-923e-f0924cb993e9","Type":"ContainerStarted","Data":"3f2868c37cfe309f25fe4f2eb5d658d7d785933410bad1b1de398f498c71c445"} Feb 16 13:46:21 crc kubenswrapper[4812]: I0216 13:46:21.647658 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vx2lk" podUID="ce64d471-324b-443d-8ef0-b13ab7882905" containerName="registry-server" probeResult="failure" output=< Feb 16 13:46:21 crc kubenswrapper[4812]: 
timeout: failed to connect service ":50051" within 1s Feb 16 13:46:21 crc kubenswrapper[4812]: > Feb 16 13:46:24 crc kubenswrapper[4812]: I0216 13:46:24.277850 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" event={"ID":"f8571da4-b4fd-4d36-923e-f0924cb993e9","Type":"ContainerStarted","Data":"1807f01fd0020ae0bd66be2dca152d5c933cc912cc6d3c7b9f5f7c1ca93f4323"} Feb 16 13:46:30 crc kubenswrapper[4812]: I0216 13:46:30.555650 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:30 crc kubenswrapper[4812]: I0216 13:46:30.621464 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:32 crc kubenswrapper[4812]: I0216 13:46:32.963407 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vx2lk"] Feb 16 13:46:32 crc kubenswrapper[4812]: I0216 13:46:32.964131 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vx2lk" podUID="ce64d471-324b-443d-8ef0-b13ab7882905" containerName="registry-server" containerID="cri-o://b66b1ebbbd3a11fe13a6cf2155fc2a9be48ca03a951a818280321f508993968a" gracePeriod=2 Feb 16 13:46:33 crc kubenswrapper[4812]: I0216 13:46:33.356743 4812 generic.go:334] "Generic (PLEG): container finished" podID="ce64d471-324b-443d-8ef0-b13ab7882905" containerID="b66b1ebbbd3a11fe13a6cf2155fc2a9be48ca03a951a818280321f508993968a" exitCode=0 Feb 16 13:46:33 crc kubenswrapper[4812]: I0216 13:46:33.356798 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vx2lk" event={"ID":"ce64d471-324b-443d-8ef0-b13ab7882905","Type":"ContainerDied","Data":"b66b1ebbbd3a11fe13a6cf2155fc2a9be48ca03a951a818280321f508993968a"} Feb 16 13:46:33 crc kubenswrapper[4812]: I0216 
13:46:33.925997 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:33 crc kubenswrapper[4812]: I0216 13:46:33.998532 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hp2p\" (UniqueName: \"kubernetes.io/projected/ce64d471-324b-443d-8ef0-b13ab7882905-kube-api-access-2hp2p\") pod \"ce64d471-324b-443d-8ef0-b13ab7882905\" (UID: \"ce64d471-324b-443d-8ef0-b13ab7882905\") " Feb 16 13:46:33 crc kubenswrapper[4812]: I0216 13:46:33.998619 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce64d471-324b-443d-8ef0-b13ab7882905-catalog-content\") pod \"ce64d471-324b-443d-8ef0-b13ab7882905\" (UID: \"ce64d471-324b-443d-8ef0-b13ab7882905\") " Feb 16 13:46:33 crc kubenswrapper[4812]: I0216 13:46:33.998715 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce64d471-324b-443d-8ef0-b13ab7882905-utilities\") pod \"ce64d471-324b-443d-8ef0-b13ab7882905\" (UID: \"ce64d471-324b-443d-8ef0-b13ab7882905\") " Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:33.999790 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce64d471-324b-443d-8ef0-b13ab7882905-utilities" (OuterVolumeSpecName: "utilities") pod "ce64d471-324b-443d-8ef0-b13ab7882905" (UID: "ce64d471-324b-443d-8ef0-b13ab7882905"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:34.003610 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce64d471-324b-443d-8ef0-b13ab7882905-kube-api-access-2hp2p" (OuterVolumeSpecName: "kube-api-access-2hp2p") pod "ce64d471-324b-443d-8ef0-b13ab7882905" (UID: "ce64d471-324b-443d-8ef0-b13ab7882905"). InnerVolumeSpecName "kube-api-access-2hp2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:34.100526 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce64d471-324b-443d-8ef0-b13ab7882905-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:34.100604 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hp2p\" (UniqueName: \"kubernetes.io/projected/ce64d471-324b-443d-8ef0-b13ab7882905-kube-api-access-2hp2p\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:34.115187 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce64d471-324b-443d-8ef0-b13ab7882905-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce64d471-324b-443d-8ef0-b13ab7882905" (UID: "ce64d471-324b-443d-8ef0-b13ab7882905"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:34.201923 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce64d471-324b-443d-8ef0-b13ab7882905-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:34.364033 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" event={"ID":"f8571da4-b4fd-4d36-923e-f0924cb993e9","Type":"ContainerStarted","Data":"7471026e40e4dc23275818504086c96dc1d6b63a3619819513d85b7dde94a1c3"} Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:34.364413 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:34.366179 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vx2lk" event={"ID":"ce64d471-324b-443d-8ef0-b13ab7882905","Type":"ContainerDied","Data":"a6a7bd3064cfe081d32ea255743f6fba110d657c35a8c12a72a240078cf677f1"} Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:34.366251 4812 scope.go:117] "RemoveContainer" containerID="b66b1ebbbd3a11fe13a6cf2155fc2a9be48ca03a951a818280321f508993968a" Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:34.366375 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vx2lk" Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:34.366415 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:34.382606 4812 scope.go:117] "RemoveContainer" containerID="23b3300c9fb0fd2788ad46531c27101d5f84b2956f80b9157d56568f75f9dc8e" Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:34.390367 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-7db4b9ddb7-grxq9" podStartSLOduration=3.524623076 podStartE2EDuration="22.390346361s" podCreationTimestamp="2026-02-16 13:46:12 +0000 UTC" firstStartedPulling="2026-02-16 13:46:14.564698828 +0000 UTC m=+863.629029529" lastFinishedPulling="2026-02-16 13:46:33.430422113 +0000 UTC m=+882.494752814" observedRunningTime="2026-02-16 13:46:34.388762354 +0000 UTC m=+883.453093065" watchObservedRunningTime="2026-02-16 13:46:34.390346361 +0000 UTC m=+883.454677052" Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:34.419117 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vx2lk"] Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:34.420081 4812 scope.go:117] "RemoveContainer" containerID="fe06c7d86232828306c395487239889a4a5aa43c55e0f6b372ec6ffe6d829501" Feb 16 13:46:34 crc kubenswrapper[4812]: I0216 13:46:34.426757 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vx2lk"] Feb 16 13:46:35 crc kubenswrapper[4812]: I0216 13:46:35.885994 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce64d471-324b-443d-8ef0-b13ab7882905" path="/var/lib/kubelet/pods/ce64d471-324b-443d-8ef0-b13ab7882905/volumes" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.382892 4812 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/community-operators-6pmzj"] Feb 16 13:46:47 crc kubenswrapper[4812]: E0216 13:46:47.383678 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce64d471-324b-443d-8ef0-b13ab7882905" containerName="extract-utilities" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.383692 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce64d471-324b-443d-8ef0-b13ab7882905" containerName="extract-utilities" Feb 16 13:46:47 crc kubenswrapper[4812]: E0216 13:46:47.383703 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce64d471-324b-443d-8ef0-b13ab7882905" containerName="registry-server" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.383709 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce64d471-324b-443d-8ef0-b13ab7882905" containerName="registry-server" Feb 16 13:46:47 crc kubenswrapper[4812]: E0216 13:46:47.383730 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce64d471-324b-443d-8ef0-b13ab7882905" containerName="extract-content" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.383737 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce64d471-324b-443d-8ef0-b13ab7882905" containerName="extract-content" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.383841 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce64d471-324b-443d-8ef0-b13ab7882905" containerName="registry-server" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.384628 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.400618 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6pmzj"] Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.465579 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be726ee-091f-423d-a097-73360c0e9f81-utilities\") pod \"community-operators-6pmzj\" (UID: \"9be726ee-091f-423d-a097-73360c0e9f81\") " pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.465646 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be726ee-091f-423d-a097-73360c0e9f81-catalog-content\") pod \"community-operators-6pmzj\" (UID: \"9be726ee-091f-423d-a097-73360c0e9f81\") " pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.465684 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrz4r\" (UniqueName: \"kubernetes.io/projected/9be726ee-091f-423d-a097-73360c0e9f81-kube-api-access-qrz4r\") pod \"community-operators-6pmzj\" (UID: \"9be726ee-091f-423d-a097-73360c0e9f81\") " pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.567472 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be726ee-091f-423d-a097-73360c0e9f81-utilities\") pod \"community-operators-6pmzj\" (UID: \"9be726ee-091f-423d-a097-73360c0e9f81\") " pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.568094 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be726ee-091f-423d-a097-73360c0e9f81-catalog-content\") pod \"community-operators-6pmzj\" (UID: \"9be726ee-091f-423d-a097-73360c0e9f81\") " pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.568163 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrz4r\" (UniqueName: \"kubernetes.io/projected/9be726ee-091f-423d-a097-73360c0e9f81-kube-api-access-qrz4r\") pod \"community-operators-6pmzj\" (UID: \"9be726ee-091f-423d-a097-73360c0e9f81\") " pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.568196 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be726ee-091f-423d-a097-73360c0e9f81-utilities\") pod \"community-operators-6pmzj\" (UID: \"9be726ee-091f-423d-a097-73360c0e9f81\") " pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.568560 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be726ee-091f-423d-a097-73360c0e9f81-catalog-content\") pod \"community-operators-6pmzj\" (UID: \"9be726ee-091f-423d-a097-73360c0e9f81\") " pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.592658 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrz4r\" (UniqueName: \"kubernetes.io/projected/9be726ee-091f-423d-a097-73360c0e9f81-kube-api-access-qrz4r\") pod \"community-operators-6pmzj\" (UID: \"9be726ee-091f-423d-a097-73360c0e9f81\") " pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.708060 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:46:47 crc kubenswrapper[4812]: I0216 13:46:47.987620 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6pmzj"] Feb 16 13:46:48 crc kubenswrapper[4812]: I0216 13:46:48.446103 4812 generic.go:334] "Generic (PLEG): container finished" podID="9be726ee-091f-423d-a097-73360c0e9f81" containerID="ac7dcaaf0c571e5dee75789e8af21b12f66f86c42df7a96380939e8f33903ef5" exitCode=0 Feb 16 13:46:48 crc kubenswrapper[4812]: I0216 13:46:48.446167 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6pmzj" event={"ID":"9be726ee-091f-423d-a097-73360c0e9f81","Type":"ContainerDied","Data":"ac7dcaaf0c571e5dee75789e8af21b12f66f86c42df7a96380939e8f33903ef5"} Feb 16 13:46:48 crc kubenswrapper[4812]: I0216 13:46:48.446230 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6pmzj" event={"ID":"9be726ee-091f-423d-a097-73360c0e9f81","Type":"ContainerStarted","Data":"bf7196a4c7908889b8a33f751146284753df9ac71bfc921f7cac511ed1f65591"} Feb 16 13:46:49 crc kubenswrapper[4812]: I0216 13:46:49.452992 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6pmzj" event={"ID":"9be726ee-091f-423d-a097-73360c0e9f81","Type":"ContainerStarted","Data":"4c963e81da8361e2d910ed2d49ab6515e95e5aec33ae6b5415a737724ae75d74"} Feb 16 13:46:50 crc kubenswrapper[4812]: I0216 13:46:50.460980 4812 generic.go:334] "Generic (PLEG): container finished" podID="9be726ee-091f-423d-a097-73360c0e9f81" containerID="4c963e81da8361e2d910ed2d49ab6515e95e5aec33ae6b5415a737724ae75d74" exitCode=0 Feb 16 13:46:50 crc kubenswrapper[4812]: I0216 13:46:50.461048 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6pmzj" 
event={"ID":"9be726ee-091f-423d-a097-73360c0e9f81","Type":"ContainerDied","Data":"4c963e81da8361e2d910ed2d49ab6515e95e5aec33ae6b5415a737724ae75d74"} Feb 16 13:46:51 crc kubenswrapper[4812]: I0216 13:46:51.468955 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6pmzj" event={"ID":"9be726ee-091f-423d-a097-73360c0e9f81","Type":"ContainerStarted","Data":"5d34a066a7c78e6581daa21e1c9bdfce73afb6e30f0e8715d176fb5497bb663e"} Feb 16 13:46:51 crc kubenswrapper[4812]: I0216 13:46:51.488595 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6pmzj" podStartSLOduration=1.842574556 podStartE2EDuration="4.488530202s" podCreationTimestamp="2026-02-16 13:46:47 +0000 UTC" firstStartedPulling="2026-02-16 13:46:48.448082991 +0000 UTC m=+897.512413692" lastFinishedPulling="2026-02-16 13:46:51.094038627 +0000 UTC m=+900.158369338" observedRunningTime="2026-02-16 13:46:51.486615756 +0000 UTC m=+900.550946467" watchObservedRunningTime="2026-02-16 13:46:51.488530202 +0000 UTC m=+900.552860903" Feb 16 13:46:57 crc kubenswrapper[4812]: I0216 13:46:57.708613 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:46:57 crc kubenswrapper[4812]: I0216 13:46:57.709235 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:46:57 crc kubenswrapper[4812]: I0216 13:46:57.751727 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:46:58 crc kubenswrapper[4812]: I0216 13:46:58.547079 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:46:58 crc kubenswrapper[4812]: I0216 13:46:58.586762 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-6pmzj"] Feb 16 13:47:00 crc kubenswrapper[4812]: I0216 13:47:00.520042 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6pmzj" podUID="9be726ee-091f-423d-a097-73360c0e9f81" containerName="registry-server" containerID="cri-o://5d34a066a7c78e6581daa21e1c9bdfce73afb6e30f0e8715d176fb5497bb663e" gracePeriod=2 Feb 16 13:47:00 crc kubenswrapper[4812]: I0216 13:47:00.914232 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:47:00 crc kubenswrapper[4812]: I0216 13:47:00.947697 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be726ee-091f-423d-a097-73360c0e9f81-utilities\") pod \"9be726ee-091f-423d-a097-73360c0e9f81\" (UID: \"9be726ee-091f-423d-a097-73360c0e9f81\") " Feb 16 13:47:00 crc kubenswrapper[4812]: I0216 13:47:00.947795 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrz4r\" (UniqueName: \"kubernetes.io/projected/9be726ee-091f-423d-a097-73360c0e9f81-kube-api-access-qrz4r\") pod \"9be726ee-091f-423d-a097-73360c0e9f81\" (UID: \"9be726ee-091f-423d-a097-73360c0e9f81\") " Feb 16 13:47:00 crc kubenswrapper[4812]: I0216 13:47:00.947874 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be726ee-091f-423d-a097-73360c0e9f81-catalog-content\") pod \"9be726ee-091f-423d-a097-73360c0e9f81\" (UID: \"9be726ee-091f-423d-a097-73360c0e9f81\") " Feb 16 13:47:00 crc kubenswrapper[4812]: I0216 13:47:00.949541 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9be726ee-091f-423d-a097-73360c0e9f81-utilities" (OuterVolumeSpecName: "utilities") pod "9be726ee-091f-423d-a097-73360c0e9f81" (UID: 
"9be726ee-091f-423d-a097-73360c0e9f81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:47:00 crc kubenswrapper[4812]: I0216 13:47:00.961409 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be726ee-091f-423d-a097-73360c0e9f81-kube-api-access-qrz4r" (OuterVolumeSpecName: "kube-api-access-qrz4r") pod "9be726ee-091f-423d-a097-73360c0e9f81" (UID: "9be726ee-091f-423d-a097-73360c0e9f81"). InnerVolumeSpecName "kube-api-access-qrz4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.006292 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9be726ee-091f-423d-a097-73360c0e9f81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9be726ee-091f-423d-a097-73360c0e9f81" (UID: "9be726ee-091f-423d-a097-73360c0e9f81"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.049995 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be726ee-091f-423d-a097-73360c0e9f81-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.050039 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrz4r\" (UniqueName: \"kubernetes.io/projected/9be726ee-091f-423d-a097-73360c0e9f81-kube-api-access-qrz4r\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.050056 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be726ee-091f-423d-a097-73360c0e9f81-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.530702 4812 generic.go:334] "Generic (PLEG): container finished" 
podID="9be726ee-091f-423d-a097-73360c0e9f81" containerID="5d34a066a7c78e6581daa21e1c9bdfce73afb6e30f0e8715d176fb5497bb663e" exitCode=0 Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.530787 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6pmzj" Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.530822 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6pmzj" event={"ID":"9be726ee-091f-423d-a097-73360c0e9f81","Type":"ContainerDied","Data":"5d34a066a7c78e6581daa21e1c9bdfce73afb6e30f0e8715d176fb5497bb663e"} Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.531389 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6pmzj" event={"ID":"9be726ee-091f-423d-a097-73360c0e9f81","Type":"ContainerDied","Data":"bf7196a4c7908889b8a33f751146284753df9ac71bfc921f7cac511ed1f65591"} Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.531493 4812 scope.go:117] "RemoveContainer" containerID="5d34a066a7c78e6581daa21e1c9bdfce73afb6e30f0e8715d176fb5497bb663e" Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.559306 4812 scope.go:117] "RemoveContainer" containerID="4c963e81da8361e2d910ed2d49ab6515e95e5aec33ae6b5415a737724ae75d74" Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.572656 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6pmzj"] Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.588706 4812 scope.go:117] "RemoveContainer" containerID="ac7dcaaf0c571e5dee75789e8af21b12f66f86c42df7a96380939e8f33903ef5" Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.597078 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6pmzj"] Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.608935 4812 scope.go:117] "RemoveContainer" 
containerID="5d34a066a7c78e6581daa21e1c9bdfce73afb6e30f0e8715d176fb5497bb663e" Feb 16 13:47:01 crc kubenswrapper[4812]: E0216 13:47:01.609678 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d34a066a7c78e6581daa21e1c9bdfce73afb6e30f0e8715d176fb5497bb663e\": container with ID starting with 5d34a066a7c78e6581daa21e1c9bdfce73afb6e30f0e8715d176fb5497bb663e not found: ID does not exist" containerID="5d34a066a7c78e6581daa21e1c9bdfce73afb6e30f0e8715d176fb5497bb663e" Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.609710 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d34a066a7c78e6581daa21e1c9bdfce73afb6e30f0e8715d176fb5497bb663e"} err="failed to get container status \"5d34a066a7c78e6581daa21e1c9bdfce73afb6e30f0e8715d176fb5497bb663e\": rpc error: code = NotFound desc = could not find container \"5d34a066a7c78e6581daa21e1c9bdfce73afb6e30f0e8715d176fb5497bb663e\": container with ID starting with 5d34a066a7c78e6581daa21e1c9bdfce73afb6e30f0e8715d176fb5497bb663e not found: ID does not exist" Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.609733 4812 scope.go:117] "RemoveContainer" containerID="4c963e81da8361e2d910ed2d49ab6515e95e5aec33ae6b5415a737724ae75d74" Feb 16 13:47:01 crc kubenswrapper[4812]: E0216 13:47:01.610359 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c963e81da8361e2d910ed2d49ab6515e95e5aec33ae6b5415a737724ae75d74\": container with ID starting with 4c963e81da8361e2d910ed2d49ab6515e95e5aec33ae6b5415a737724ae75d74 not found: ID does not exist" containerID="4c963e81da8361e2d910ed2d49ab6515e95e5aec33ae6b5415a737724ae75d74" Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.610418 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4c963e81da8361e2d910ed2d49ab6515e95e5aec33ae6b5415a737724ae75d74"} err="failed to get container status \"4c963e81da8361e2d910ed2d49ab6515e95e5aec33ae6b5415a737724ae75d74\": rpc error: code = NotFound desc = could not find container \"4c963e81da8361e2d910ed2d49ab6515e95e5aec33ae6b5415a737724ae75d74\": container with ID starting with 4c963e81da8361e2d910ed2d49ab6515e95e5aec33ae6b5415a737724ae75d74 not found: ID does not exist" Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.610465 4812 scope.go:117] "RemoveContainer" containerID="ac7dcaaf0c571e5dee75789e8af21b12f66f86c42df7a96380939e8f33903ef5" Feb 16 13:47:01 crc kubenswrapper[4812]: E0216 13:47:01.610767 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac7dcaaf0c571e5dee75789e8af21b12f66f86c42df7a96380939e8f33903ef5\": container with ID starting with ac7dcaaf0c571e5dee75789e8af21b12f66f86c42df7a96380939e8f33903ef5 not found: ID does not exist" containerID="ac7dcaaf0c571e5dee75789e8af21b12f66f86c42df7a96380939e8f33903ef5" Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.610807 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac7dcaaf0c571e5dee75789e8af21b12f66f86c42df7a96380939e8f33903ef5"} err="failed to get container status \"ac7dcaaf0c571e5dee75789e8af21b12f66f86c42df7a96380939e8f33903ef5\": rpc error: code = NotFound desc = could not find container \"ac7dcaaf0c571e5dee75789e8af21b12f66f86c42df7a96380939e8f33903ef5\": container with ID starting with ac7dcaaf0c571e5dee75789e8af21b12f66f86c42df7a96380939e8f33903ef5 not found: ID does not exist" Feb 16 13:47:01 crc kubenswrapper[4812]: I0216 13:47:01.887787 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be726ee-091f-423d-a097-73360c0e9f81" path="/var/lib/kubelet/pods/9be726ee-091f-423d-a097-73360c0e9f81/volumes" Feb 16 13:47:05 crc kubenswrapper[4812]: I0216 
13:47:05.842326 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm"] Feb 16 13:47:05 crc kubenswrapper[4812]: E0216 13:47:05.843238 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be726ee-091f-423d-a097-73360c0e9f81" containerName="registry-server" Feb 16 13:47:05 crc kubenswrapper[4812]: I0216 13:47:05.843261 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be726ee-091f-423d-a097-73360c0e9f81" containerName="registry-server" Feb 16 13:47:05 crc kubenswrapper[4812]: E0216 13:47:05.843281 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be726ee-091f-423d-a097-73360c0e9f81" containerName="extract-utilities" Feb 16 13:47:05 crc kubenswrapper[4812]: I0216 13:47:05.843288 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be726ee-091f-423d-a097-73360c0e9f81" containerName="extract-utilities" Feb 16 13:47:05 crc kubenswrapper[4812]: E0216 13:47:05.843307 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be726ee-091f-423d-a097-73360c0e9f81" containerName="extract-content" Feb 16 13:47:05 crc kubenswrapper[4812]: I0216 13:47:05.843317 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be726ee-091f-423d-a097-73360c0e9f81" containerName="extract-content" Feb 16 13:47:05 crc kubenswrapper[4812]: I0216 13:47:05.843468 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be726ee-091f-423d-a097-73360c0e9f81" containerName="registry-server" Feb 16 13:47:05 crc kubenswrapper[4812]: I0216 13:47:05.844493 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" Feb 16 13:47:05 crc kubenswrapper[4812]: I0216 13:47:05.850465 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 13:47:05 crc kubenswrapper[4812]: I0216 13:47:05.850987 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm"] Feb 16 13:47:05 crc kubenswrapper[4812]: I0216 13:47:05.910760 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm\" (UID: \"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" Feb 16 13:47:05 crc kubenswrapper[4812]: I0216 13:47:05.911095 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtg9m\" (UniqueName: \"kubernetes.io/projected/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-kube-api-access-vtg9m\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm\" (UID: \"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" Feb 16 13:47:05 crc kubenswrapper[4812]: I0216 13:47:05.911198 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm\" (UID: \"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" Feb 16 13:47:06 crc kubenswrapper[4812]: 
I0216 13:47:06.013674 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtg9m\" (UniqueName: \"kubernetes.io/projected/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-kube-api-access-vtg9m\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm\" (UID: \"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" Feb 16 13:47:06 crc kubenswrapper[4812]: I0216 13:47:06.013776 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm\" (UID: \"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" Feb 16 13:47:06 crc kubenswrapper[4812]: I0216 13:47:06.013880 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm\" (UID: \"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" Feb 16 13:47:06 crc kubenswrapper[4812]: I0216 13:47:06.014582 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm\" (UID: \"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" Feb 16 13:47:06 crc kubenswrapper[4812]: I0216 13:47:06.014699 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm\" (UID: \"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" Feb 16 13:47:06 crc kubenswrapper[4812]: I0216 13:47:06.043300 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtg9m\" (UniqueName: \"kubernetes.io/projected/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-kube-api-access-vtg9m\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm\" (UID: \"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" Feb 16 13:47:06 crc kubenswrapper[4812]: I0216 13:47:06.178607 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" Feb 16 13:47:06 crc kubenswrapper[4812]: I0216 13:47:06.406098 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm"] Feb 16 13:47:06 crc kubenswrapper[4812]: I0216 13:47:06.565507 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" event={"ID":"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a","Type":"ContainerStarted","Data":"f40b37fd19d01d16a9681fbcf065ce6babd28eb2c8038c01d2f08bfd9f90807b"} Feb 16 13:47:07 crc kubenswrapper[4812]: I0216 13:47:07.572632 4812 generic.go:334] "Generic (PLEG): container finished" podID="3be7ee4e-d1c9-4c45-87b8-0959f910fe9a" containerID="722b8ad06f79feda9939225643c61e95db3fabf63a682eae1933e693ccb98b54" exitCode=0 Feb 16 13:47:07 crc kubenswrapper[4812]: I0216 13:47:07.572687 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" event={"ID":"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a","Type":"ContainerDied","Data":"722b8ad06f79feda9939225643c61e95db3fabf63a682eae1933e693ccb98b54"} Feb 16 13:47:09 crc kubenswrapper[4812]: I0216 13:47:09.583808 4812 generic.go:334] "Generic (PLEG): container finished" podID="3be7ee4e-d1c9-4c45-87b8-0959f910fe9a" containerID="0b257ce1e799e759a8388bc43de57108441fa4e5c5ad049392d7e56ccd1e5a24" exitCode=0 Feb 16 13:47:09 crc kubenswrapper[4812]: I0216 13:47:09.584135 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" event={"ID":"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a","Type":"ContainerDied","Data":"0b257ce1e799e759a8388bc43de57108441fa4e5c5ad049392d7e56ccd1e5a24"} Feb 16 13:47:10 crc kubenswrapper[4812]: I0216 13:47:10.592594 4812 generic.go:334] "Generic (PLEG): container finished" podID="3be7ee4e-d1c9-4c45-87b8-0959f910fe9a" containerID="502abbfff61a6dc32e2f86bf3c0b50b5b00e3e23044c4c7a0a690f7ab4088acf" exitCode=0 Feb 16 13:47:10 crc kubenswrapper[4812]: I0216 13:47:10.592723 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" event={"ID":"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a","Type":"ContainerDied","Data":"502abbfff61a6dc32e2f86bf3c0b50b5b00e3e23044c4c7a0a690f7ab4088acf"} Feb 16 13:47:11 crc kubenswrapper[4812]: I0216 13:47:11.847532 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" Feb 16 13:47:11 crc kubenswrapper[4812]: I0216 13:47:11.896773 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtg9m\" (UniqueName: \"kubernetes.io/projected/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-kube-api-access-vtg9m\") pod \"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a\" (UID: \"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a\") " Feb 16 13:47:11 crc kubenswrapper[4812]: I0216 13:47:11.896828 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-util\") pod \"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a\" (UID: \"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a\") " Feb 16 13:47:11 crc kubenswrapper[4812]: I0216 13:47:11.896892 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-bundle\") pod \"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a\" (UID: \"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a\") " Feb 16 13:47:11 crc kubenswrapper[4812]: I0216 13:47:11.897717 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-bundle" (OuterVolumeSpecName: "bundle") pod "3be7ee4e-d1c9-4c45-87b8-0959f910fe9a" (UID: "3be7ee4e-d1c9-4c45-87b8-0959f910fe9a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:47:11 crc kubenswrapper[4812]: I0216 13:47:11.907946 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-kube-api-access-vtg9m" (OuterVolumeSpecName: "kube-api-access-vtg9m") pod "3be7ee4e-d1c9-4c45-87b8-0959f910fe9a" (UID: "3be7ee4e-d1c9-4c45-87b8-0959f910fe9a"). InnerVolumeSpecName "kube-api-access-vtg9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:47:11 crc kubenswrapper[4812]: I0216 13:47:11.912572 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-util" (OuterVolumeSpecName: "util") pod "3be7ee4e-d1c9-4c45-87b8-0959f910fe9a" (UID: "3be7ee4e-d1c9-4c45-87b8-0959f910fe9a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:47:11 crc kubenswrapper[4812]: I0216 13:47:11.998299 4812 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:11 crc kubenswrapper[4812]: I0216 13:47:11.998825 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtg9m\" (UniqueName: \"kubernetes.io/projected/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-kube-api-access-vtg9m\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:11 crc kubenswrapper[4812]: I0216 13:47:11.998859 4812 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3be7ee4e-d1c9-4c45-87b8-0959f910fe9a-util\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:12 crc kubenswrapper[4812]: I0216 13:47:12.606126 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" event={"ID":"3be7ee4e-d1c9-4c45-87b8-0959f910fe9a","Type":"ContainerDied","Data":"f40b37fd19d01d16a9681fbcf065ce6babd28eb2c8038c01d2f08bfd9f90807b"} Feb 16 13:47:12 crc kubenswrapper[4812]: I0216 13:47:12.606170 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f40b37fd19d01d16a9681fbcf065ce6babd28eb2c8038c01d2f08bfd9f90807b" Feb 16 13:47:12 crc kubenswrapper[4812]: I0216 13:47:12.606235 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm" Feb 16 13:47:14 crc kubenswrapper[4812]: I0216 13:47:14.549865 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:47:14 crc kubenswrapper[4812]: I0216 13:47:14.550278 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:47:15 crc kubenswrapper[4812]: I0216 13:47:15.330327 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-g42s8"] Feb 16 13:47:15 crc kubenswrapper[4812]: E0216 13:47:15.330637 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be7ee4e-d1c9-4c45-87b8-0959f910fe9a" containerName="util" Feb 16 13:47:15 crc kubenswrapper[4812]: I0216 13:47:15.330653 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be7ee4e-d1c9-4c45-87b8-0959f910fe9a" containerName="util" Feb 16 13:47:15 crc kubenswrapper[4812]: E0216 13:47:15.330670 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be7ee4e-d1c9-4c45-87b8-0959f910fe9a" containerName="pull" Feb 16 13:47:15 crc kubenswrapper[4812]: I0216 13:47:15.330677 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be7ee4e-d1c9-4c45-87b8-0959f910fe9a" containerName="pull" Feb 16 13:47:15 crc kubenswrapper[4812]: E0216 13:47:15.330688 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be7ee4e-d1c9-4c45-87b8-0959f910fe9a" containerName="extract" Feb 16 
13:47:15 crc kubenswrapper[4812]: I0216 13:47:15.330696 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be7ee4e-d1c9-4c45-87b8-0959f910fe9a" containerName="extract" Feb 16 13:47:15 crc kubenswrapper[4812]: I0216 13:47:15.330817 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="3be7ee4e-d1c9-4c45-87b8-0959f910fe9a" containerName="extract" Feb 16 13:47:15 crc kubenswrapper[4812]: I0216 13:47:15.331298 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-g42s8" Feb 16 13:47:15 crc kubenswrapper[4812]: I0216 13:47:15.338295 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-wxkfl" Feb 16 13:47:15 crc kubenswrapper[4812]: I0216 13:47:15.339543 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 16 13:47:15 crc kubenswrapper[4812]: I0216 13:47:15.339545 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 16 13:47:15 crc kubenswrapper[4812]: I0216 13:47:15.363522 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-g42s8"] Feb 16 13:47:15 crc kubenswrapper[4812]: I0216 13:47:15.444158 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvmq6\" (UniqueName: \"kubernetes.io/projected/acdc5133-d5db-443d-b935-f284f767ac99-kube-api-access-vvmq6\") pod \"nmstate-operator-694c9596b7-g42s8\" (UID: \"acdc5133-d5db-443d-b935-f284f767ac99\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-g42s8" Feb 16 13:47:15 crc kubenswrapper[4812]: I0216 13:47:15.545131 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvmq6\" (UniqueName: \"kubernetes.io/projected/acdc5133-d5db-443d-b935-f284f767ac99-kube-api-access-vvmq6\") pod 
\"nmstate-operator-694c9596b7-g42s8\" (UID: \"acdc5133-d5db-443d-b935-f284f767ac99\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-g42s8" Feb 16 13:47:15 crc kubenswrapper[4812]: I0216 13:47:15.574960 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvmq6\" (UniqueName: \"kubernetes.io/projected/acdc5133-d5db-443d-b935-f284f767ac99-kube-api-access-vvmq6\") pod \"nmstate-operator-694c9596b7-g42s8\" (UID: \"acdc5133-d5db-443d-b935-f284f767ac99\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-g42s8" Feb 16 13:47:15 crc kubenswrapper[4812]: I0216 13:47:15.651133 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-g42s8" Feb 16 13:47:15 crc kubenswrapper[4812]: I0216 13:47:15.899452 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-g42s8"] Feb 16 13:47:16 crc kubenswrapper[4812]: I0216 13:47:16.626494 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-g42s8" event={"ID":"acdc5133-d5db-443d-b935-f284f767ac99","Type":"ContainerStarted","Data":"6ceffb8d7ee7b3b86cc4b1b6f17866e507dd19a0c5f742812e5dd89b8492d2f0"} Feb 16 13:47:18 crc kubenswrapper[4812]: I0216 13:47:18.638819 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-g42s8" event={"ID":"acdc5133-d5db-443d-b935-f284f767ac99","Type":"ContainerStarted","Data":"5b2581faf18077814643e15db02bbb6aa9995143df626bafd9f62c3fe2a2d9cf"} Feb 16 13:47:18 crc kubenswrapper[4812]: I0216 13:47:18.660611 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-g42s8" podStartSLOduration=1.800658205 podStartE2EDuration="3.660587663s" podCreationTimestamp="2026-02-16 13:47:15 +0000 UTC" firstStartedPulling="2026-02-16 13:47:15.908169276 +0000 UTC m=+924.972499977" 
lastFinishedPulling="2026-02-16 13:47:17.768098734 +0000 UTC m=+926.832429435" observedRunningTime="2026-02-16 13:47:18.655151239 +0000 UTC m=+927.719481960" watchObservedRunningTime="2026-02-16 13:47:18.660587663 +0000 UTC m=+927.724918384" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.543997 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-8zchp"] Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.545468 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8zchp" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.547328 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-kq7g7" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.552951 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-8zchp"] Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.582412 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws"] Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.583315 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.586729 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.593209 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b78s2\" (UniqueName: \"kubernetes.io/projected/de488e97-05f3-4b9c-abd2-2ae259997bc1-kube-api-access-b78s2\") pod \"nmstate-metrics-58c85c668d-8zchp\" (UID: \"de488e97-05f3-4b9c-abd2-2ae259997bc1\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-8zchp" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.594574 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-5dtvn"] Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.595563 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-5dtvn" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.610347 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws"] Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.694898 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d5f47728-5a50-45df-8379-cc1e7779f00c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-8kkws\" (UID: \"d5f47728-5a50-45df-8379-cc1e7779f00c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.694947 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b68968e3-1037-494a-8c4b-f6f4ae6c3e02-dbus-socket\") pod \"nmstate-handler-5dtvn\" (UID: \"b68968e3-1037-494a-8c4b-f6f4ae6c3e02\") " 
pod="openshift-nmstate/nmstate-handler-5dtvn" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.695077 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b78s2\" (UniqueName: \"kubernetes.io/projected/de488e97-05f3-4b9c-abd2-2ae259997bc1-kube-api-access-b78s2\") pod \"nmstate-metrics-58c85c668d-8zchp\" (UID: \"de488e97-05f3-4b9c-abd2-2ae259997bc1\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-8zchp" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.695191 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b68968e3-1037-494a-8c4b-f6f4ae6c3e02-nmstate-lock\") pod \"nmstate-handler-5dtvn\" (UID: \"b68968e3-1037-494a-8c4b-f6f4ae6c3e02\") " pod="openshift-nmstate/nmstate-handler-5dtvn" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.695249 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b68968e3-1037-494a-8c4b-f6f4ae6c3e02-ovs-socket\") pod \"nmstate-handler-5dtvn\" (UID: \"b68968e3-1037-494a-8c4b-f6f4ae6c3e02\") " pod="openshift-nmstate/nmstate-handler-5dtvn" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.695313 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2sh4\" (UniqueName: \"kubernetes.io/projected/b68968e3-1037-494a-8c4b-f6f4ae6c3e02-kube-api-access-l2sh4\") pod \"nmstate-handler-5dtvn\" (UID: \"b68968e3-1037-494a-8c4b-f6f4ae6c3e02\") " pod="openshift-nmstate/nmstate-handler-5dtvn" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.695437 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq6p8\" (UniqueName: \"kubernetes.io/projected/d5f47728-5a50-45df-8379-cc1e7779f00c-kube-api-access-mq6p8\") pod 
\"nmstate-webhook-866bcb46dc-8kkws\" (UID: \"d5f47728-5a50-45df-8379-cc1e7779f00c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.718240 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp"] Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.719433 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.720989 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b78s2\" (UniqueName: \"kubernetes.io/projected/de488e97-05f3-4b9c-abd2-2ae259997bc1-kube-api-access-b78s2\") pod \"nmstate-metrics-58c85c668d-8zchp\" (UID: \"de488e97-05f3-4b9c-abd2-2ae259997bc1\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-8zchp" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.722382 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.722708 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-jz89f" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.723362 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.730892 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp"] Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.796290 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq6p8\" (UniqueName: \"kubernetes.io/projected/d5f47728-5a50-45df-8379-cc1e7779f00c-kube-api-access-mq6p8\") pod \"nmstate-webhook-866bcb46dc-8kkws\" (UID: \"d5f47728-5a50-45df-8379-cc1e7779f00c\") " 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.796347 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2833a171-e8b3-4a2e-99bd-28b4724d3123-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-jh9xp\" (UID: \"2833a171-e8b3-4a2e-99bd-28b4724d3123\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.796387 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d5f47728-5a50-45df-8379-cc1e7779f00c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-8kkws\" (UID: \"d5f47728-5a50-45df-8379-cc1e7779f00c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.796409 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b68968e3-1037-494a-8c4b-f6f4ae6c3e02-dbus-socket\") pod \"nmstate-handler-5dtvn\" (UID: \"b68968e3-1037-494a-8c4b-f6f4ae6c3e02\") " pod="openshift-nmstate/nmstate-handler-5dtvn" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.796436 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgzwx\" (UniqueName: \"kubernetes.io/projected/2833a171-e8b3-4a2e-99bd-28b4724d3123-kube-api-access-lgzwx\") pod \"nmstate-console-plugin-5c78fc5d65-jh9xp\" (UID: \"2833a171-e8b3-4a2e-99bd-28b4724d3123\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.796485 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b68968e3-1037-494a-8c4b-f6f4ae6c3e02-nmstate-lock\") pod 
\"nmstate-handler-5dtvn\" (UID: \"b68968e3-1037-494a-8c4b-f6f4ae6c3e02\") " pod="openshift-nmstate/nmstate-handler-5dtvn" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.796512 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b68968e3-1037-494a-8c4b-f6f4ae6c3e02-ovs-socket\") pod \"nmstate-handler-5dtvn\" (UID: \"b68968e3-1037-494a-8c4b-f6f4ae6c3e02\") " pod="openshift-nmstate/nmstate-handler-5dtvn" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.796555 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2sh4\" (UniqueName: \"kubernetes.io/projected/b68968e3-1037-494a-8c4b-f6f4ae6c3e02-kube-api-access-l2sh4\") pod \"nmstate-handler-5dtvn\" (UID: \"b68968e3-1037-494a-8c4b-f6f4ae6c3e02\") " pod="openshift-nmstate/nmstate-handler-5dtvn" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.796577 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2833a171-e8b3-4a2e-99bd-28b4724d3123-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-jh9xp\" (UID: \"2833a171-e8b3-4a2e-99bd-28b4724d3123\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.796653 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b68968e3-1037-494a-8c4b-f6f4ae6c3e02-nmstate-lock\") pod \"nmstate-handler-5dtvn\" (UID: \"b68968e3-1037-494a-8c4b-f6f4ae6c3e02\") " pod="openshift-nmstate/nmstate-handler-5dtvn" Feb 16 13:47:19 crc kubenswrapper[4812]: E0216 13:47:19.796682 4812 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.796690 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b68968e3-1037-494a-8c4b-f6f4ae6c3e02-ovs-socket\") pod \"nmstate-handler-5dtvn\" (UID: \"b68968e3-1037-494a-8c4b-f6f4ae6c3e02\") " pod="openshift-nmstate/nmstate-handler-5dtvn" Feb 16 13:47:19 crc kubenswrapper[4812]: E0216 13:47:19.796720 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f47728-5a50-45df-8379-cc1e7779f00c-tls-key-pair podName:d5f47728-5a50-45df-8379-cc1e7779f00c nodeName:}" failed. No retries permitted until 2026-02-16 13:47:20.29670577 +0000 UTC m=+929.361036471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/d5f47728-5a50-45df-8379-cc1e7779f00c-tls-key-pair") pod "nmstate-webhook-866bcb46dc-8kkws" (UID: "d5f47728-5a50-45df-8379-cc1e7779f00c") : secret "openshift-nmstate-webhook" not found Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.796883 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b68968e3-1037-494a-8c4b-f6f4ae6c3e02-dbus-socket\") pod \"nmstate-handler-5dtvn\" (UID: \"b68968e3-1037-494a-8c4b-f6f4ae6c3e02\") " pod="openshift-nmstate/nmstate-handler-5dtvn" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.813146 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2sh4\" (UniqueName: \"kubernetes.io/projected/b68968e3-1037-494a-8c4b-f6f4ae6c3e02-kube-api-access-l2sh4\") pod \"nmstate-handler-5dtvn\" (UID: \"b68968e3-1037-494a-8c4b-f6f4ae6c3e02\") " pod="openshift-nmstate/nmstate-handler-5dtvn" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.817737 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq6p8\" (UniqueName: \"kubernetes.io/projected/d5f47728-5a50-45df-8379-cc1e7779f00c-kube-api-access-mq6p8\") pod \"nmstate-webhook-866bcb46dc-8kkws\" (UID: 
\"d5f47728-5a50-45df-8379-cc1e7779f00c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.867626 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8zchp" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.898953 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2833a171-e8b3-4a2e-99bd-28b4724d3123-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-jh9xp\" (UID: \"2833a171-e8b3-4a2e-99bd-28b4724d3123\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.899064 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2833a171-e8b3-4a2e-99bd-28b4724d3123-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-jh9xp\" (UID: \"2833a171-e8b3-4a2e-99bd-28b4724d3123\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.899185 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgzwx\" (UniqueName: \"kubernetes.io/projected/2833a171-e8b3-4a2e-99bd-28b4724d3123-kube-api-access-lgzwx\") pod \"nmstate-console-plugin-5c78fc5d65-jh9xp\" (UID: \"2833a171-e8b3-4a2e-99bd-28b4724d3123\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp" Feb 16 13:47:19 crc kubenswrapper[4812]: E0216 13:47:19.900728 4812 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.900794 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2833a171-e8b3-4a2e-99bd-28b4724d3123-nginx-conf\") pod 
\"nmstate-console-plugin-5c78fc5d65-jh9xp\" (UID: \"2833a171-e8b3-4a2e-99bd-28b4724d3123\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp" Feb 16 13:47:19 crc kubenswrapper[4812]: E0216 13:47:19.900815 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2833a171-e8b3-4a2e-99bd-28b4724d3123-plugin-serving-cert podName:2833a171-e8b3-4a2e-99bd-28b4724d3123 nodeName:}" failed. No retries permitted until 2026-02-16 13:47:20.4007933 +0000 UTC m=+929.465124001 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/2833a171-e8b3-4a2e-99bd-28b4724d3123-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-jh9xp" (UID: "2833a171-e8b3-4a2e-99bd-28b4724d3123") : secret "plugin-serving-cert" not found Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.922601 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgzwx\" (UniqueName: \"kubernetes.io/projected/2833a171-e8b3-4a2e-99bd-28b4724d3123-kube-api-access-lgzwx\") pod \"nmstate-console-plugin-5c78fc5d65-jh9xp\" (UID: \"2833a171-e8b3-4a2e-99bd-28b4724d3123\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.927037 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-5dtvn" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.934575 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6868d88bbd-rwbhn"] Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.935522 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:19 crc kubenswrapper[4812]: I0216 13:47:19.946029 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6868d88bbd-rwbhn"] Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.000155 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6460274e-e90f-403b-af3a-86698c022cce-service-ca\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.000217 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6460274e-e90f-403b-af3a-86698c022cce-trusted-ca-bundle\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.000264 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9z78\" (UniqueName: \"kubernetes.io/projected/6460274e-e90f-403b-af3a-86698c022cce-kube-api-access-z9z78\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.000281 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6460274e-e90f-403b-af3a-86698c022cce-console-oauth-config\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.000303 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6460274e-e90f-403b-af3a-86698c022cce-console-config\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.000329 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6460274e-e90f-403b-af3a-86698c022cce-oauth-serving-cert\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.000382 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6460274e-e90f-403b-af3a-86698c022cce-console-serving-cert\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.105024 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6460274e-e90f-403b-af3a-86698c022cce-console-serving-cert\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.105320 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6460274e-e90f-403b-af3a-86698c022cce-service-ca\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.105350 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6460274e-e90f-403b-af3a-86698c022cce-trusted-ca-bundle\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.105382 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9z78\" (UniqueName: \"kubernetes.io/projected/6460274e-e90f-403b-af3a-86698c022cce-kube-api-access-z9z78\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.105398 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6460274e-e90f-403b-af3a-86698c022cce-console-oauth-config\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.105419 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6460274e-e90f-403b-af3a-86698c022cce-console-config\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.105469 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6460274e-e90f-403b-af3a-86698c022cce-oauth-serving-cert\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.106688 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6460274e-e90f-403b-af3a-86698c022cce-service-ca\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.109993 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6460274e-e90f-403b-af3a-86698c022cce-console-config\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.110019 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6460274e-e90f-403b-af3a-86698c022cce-oauth-serving-cert\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.110206 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6460274e-e90f-403b-af3a-86698c022cce-trusted-ca-bundle\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.110786 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6460274e-e90f-403b-af3a-86698c022cce-console-serving-cert\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.110956 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6460274e-e90f-403b-af3a-86698c022cce-console-oauth-config\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.121032 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9z78\" (UniqueName: \"kubernetes.io/projected/6460274e-e90f-403b-af3a-86698c022cce-kube-api-access-z9z78\") pod \"console-6868d88bbd-rwbhn\" (UID: \"6460274e-e90f-403b-af3a-86698c022cce\") " pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.267315 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.308103 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d5f47728-5a50-45df-8379-cc1e7779f00c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-8kkws\" (UID: \"d5f47728-5a50-45df-8379-cc1e7779f00c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.311570 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d5f47728-5a50-45df-8379-cc1e7779f00c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-8kkws\" (UID: \"d5f47728-5a50-45df-8379-cc1e7779f00c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.385030 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-8zchp"] Feb 16 13:47:20 crc kubenswrapper[4812]: W0216 13:47:20.397743 4812 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde488e97_05f3_4b9c_abd2_2ae259997bc1.slice/crio-76ff5b9a0e5076394bbda1ab4826f5e501b29a41e39ff70382191ff4e9ff741d WatchSource:0}: Error finding container 76ff5b9a0e5076394bbda1ab4826f5e501b29a41e39ff70382191ff4e9ff741d: Status 404 returned error can't find the container with id 76ff5b9a0e5076394bbda1ab4826f5e501b29a41e39ff70382191ff4e9ff741d Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.409355 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2833a171-e8b3-4a2e-99bd-28b4724d3123-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-jh9xp\" (UID: \"2833a171-e8b3-4a2e-99bd-28b4724d3123\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.413047 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2833a171-e8b3-4a2e-99bd-28b4724d3123-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-jh9xp\" (UID: \"2833a171-e8b3-4a2e-99bd-28b4724d3123\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.505378 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.668362 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8zchp" event={"ID":"de488e97-05f3-4b9c-abd2-2ae259997bc1","Type":"ContainerStarted","Data":"76ff5b9a0e5076394bbda1ab4826f5e501b29a41e39ff70382191ff4e9ff741d"} Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.669616 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5dtvn" event={"ID":"b68968e3-1037-494a-8c4b-f6f4ae6c3e02","Type":"ContainerStarted","Data":"6f996fe3c924b7384ba2a20c7481f5ae99b13b79b092b45de1093d0ba422c1c2"} Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.669958 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp" Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.702124 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6868d88bbd-rwbhn"] Feb 16 13:47:20 crc kubenswrapper[4812]: W0216 13:47:20.710733 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6460274e_e90f_403b_af3a_86698c022cce.slice/crio-b6c1b173f6f74452a04cf35cbb408942bd97494d2e3472693960fa67c532c3f7 WatchSource:0}: Error finding container b6c1b173f6f74452a04cf35cbb408942bd97494d2e3472693960fa67c532c3f7: Status 404 returned error can't find the container with id b6c1b173f6f74452a04cf35cbb408942bd97494d2e3472693960fa67c532c3f7 Feb 16 13:47:20 crc kubenswrapper[4812]: I0216 13:47:20.913809 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp"] Feb 16 13:47:21 crc kubenswrapper[4812]: I0216 13:47:21.041173 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws"] Feb 
16 13:47:21 crc kubenswrapper[4812]: I0216 13:47:21.684862 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws" event={"ID":"d5f47728-5a50-45df-8379-cc1e7779f00c","Type":"ContainerStarted","Data":"b93545a35617a5f68e65a80d73a8251456fc30ed9129a8f55f4b6aa4402d742b"} Feb 16 13:47:21 crc kubenswrapper[4812]: I0216 13:47:21.687198 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6868d88bbd-rwbhn" event={"ID":"6460274e-e90f-403b-af3a-86698c022cce","Type":"ContainerStarted","Data":"9bdc2a007699d0b0a374f43c7ac0295f73365e37a3c0994a0fee75d46a0287d8"} Feb 16 13:47:21 crc kubenswrapper[4812]: I0216 13:47:21.687234 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6868d88bbd-rwbhn" event={"ID":"6460274e-e90f-403b-af3a-86698c022cce","Type":"ContainerStarted","Data":"b6c1b173f6f74452a04cf35cbb408942bd97494d2e3472693960fa67c532c3f7"} Feb 16 13:47:21 crc kubenswrapper[4812]: I0216 13:47:21.688573 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp" event={"ID":"2833a171-e8b3-4a2e-99bd-28b4724d3123","Type":"ContainerStarted","Data":"09196fbec8be49412a7648758e6f07f24127e5996385a60108d3fb6c494c0a16"} Feb 16 13:47:21 crc kubenswrapper[4812]: I0216 13:47:21.902839 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6868d88bbd-rwbhn" podStartSLOduration=2.902822759 podStartE2EDuration="2.902822759s" podCreationTimestamp="2026-02-16 13:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:47:21.707983208 +0000 UTC m=+930.772313909" watchObservedRunningTime="2026-02-16 13:47:21.902822759 +0000 UTC m=+930.967153460" Feb 16 13:47:23 crc kubenswrapper[4812]: I0216 13:47:23.703022 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws" event={"ID":"d5f47728-5a50-45df-8379-cc1e7779f00c","Type":"ContainerStarted","Data":"5d5679091cff5b31ebc3427db7cded1cc1f0d6a216d608ab616f4d150ec92281"} Feb 16 13:47:23 crc kubenswrapper[4812]: I0216 13:47:23.703621 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws" Feb 16 13:47:23 crc kubenswrapper[4812]: I0216 13:47:23.705495 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5dtvn" event={"ID":"b68968e3-1037-494a-8c4b-f6f4ae6c3e02","Type":"ContainerStarted","Data":"5fc3d549d73a754bcd43b679a17023271cbcb2b45028758fcac000bf736167d3"} Feb 16 13:47:23 crc kubenswrapper[4812]: I0216 13:47:23.705580 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-5dtvn" Feb 16 13:47:23 crc kubenswrapper[4812]: I0216 13:47:23.707133 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8zchp" event={"ID":"de488e97-05f3-4b9c-abd2-2ae259997bc1","Type":"ContainerStarted","Data":"19a96196b6b531cf0e8bad43b134c5205354d5b04e2068cbdea351bf1c0a9e12"} Feb 16 13:47:23 crc kubenswrapper[4812]: I0216 13:47:23.709323 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp" event={"ID":"2833a171-e8b3-4a2e-99bd-28b4724d3123","Type":"ContainerStarted","Data":"0d6eb30a6d8405a4ccb8ef9c0e4bb91f3b6677d573c14e3029612e3eccac0c21"} Feb 16 13:47:23 crc kubenswrapper[4812]: I0216 13:47:23.729552 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws" podStartSLOduration=2.669439192 podStartE2EDuration="4.729519863s" podCreationTimestamp="2026-02-16 13:47:19 +0000 UTC" firstStartedPulling="2026-02-16 13:47:21.047863587 +0000 UTC m=+930.112194288" lastFinishedPulling="2026-02-16 13:47:23.107944258 
+0000 UTC m=+932.172274959" observedRunningTime="2026-02-16 13:47:23.721981149 +0000 UTC m=+932.786311870" watchObservedRunningTime="2026-02-16 13:47:23.729519863 +0000 UTC m=+932.793850564" Feb 16 13:47:23 crc kubenswrapper[4812]: I0216 13:47:23.755805 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-jh9xp" podStartSLOduration=2.574521544 podStartE2EDuration="4.75578692s" podCreationTimestamp="2026-02-16 13:47:19 +0000 UTC" firstStartedPulling="2026-02-16 13:47:20.924791088 +0000 UTC m=+929.989121789" lastFinishedPulling="2026-02-16 13:47:23.106056464 +0000 UTC m=+932.170387165" observedRunningTime="2026-02-16 13:47:23.750795628 +0000 UTC m=+932.815126329" watchObservedRunningTime="2026-02-16 13:47:23.75578692 +0000 UTC m=+932.820117621" Feb 16 13:47:23 crc kubenswrapper[4812]: I0216 13:47:23.781926 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-5dtvn" podStartSLOduration=1.676996382 podStartE2EDuration="4.781910783s" podCreationTimestamp="2026-02-16 13:47:19 +0000 UTC" firstStartedPulling="2026-02-16 13:47:20.001246676 +0000 UTC m=+929.065577377" lastFinishedPulling="2026-02-16 13:47:23.106161077 +0000 UTC m=+932.170491778" observedRunningTime="2026-02-16 13:47:23.779917246 +0000 UTC m=+932.844247957" watchObservedRunningTime="2026-02-16 13:47:23.781910783 +0000 UTC m=+932.846241484" Feb 16 13:47:26 crc kubenswrapper[4812]: I0216 13:47:26.973501 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8zchp" event={"ID":"de488e97-05f3-4b9c-abd2-2ae259997bc1","Type":"ContainerStarted","Data":"9607f36008902c71513c0ea06111015bd8f7e98366ecb3e79964ac745e9f6e12"} Feb 16 13:47:26 crc kubenswrapper[4812]: I0216 13:47:26.996207 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-8zchp" 
podStartSLOduration=1.911068419 podStartE2EDuration="7.996187254s" podCreationTimestamp="2026-02-16 13:47:19 +0000 UTC" firstStartedPulling="2026-02-16 13:47:20.400181111 +0000 UTC m=+929.464511812" lastFinishedPulling="2026-02-16 13:47:26.485299956 +0000 UTC m=+935.549630647" observedRunningTime="2026-02-16 13:47:26.992839729 +0000 UTC m=+936.057170430" watchObservedRunningTime="2026-02-16 13:47:26.996187254 +0000 UTC m=+936.060517955" Feb 16 13:47:29 crc kubenswrapper[4812]: I0216 13:47:29.949823 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-5dtvn" Feb 16 13:47:30 crc kubenswrapper[4812]: I0216 13:47:30.268034 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:30 crc kubenswrapper[4812]: I0216 13:47:30.268486 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:30 crc kubenswrapper[4812]: I0216 13:47:30.272998 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:30 crc kubenswrapper[4812]: I0216 13:47:30.998528 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6868d88bbd-rwbhn" Feb 16 13:47:31 crc kubenswrapper[4812]: I0216 13:47:31.046933 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tpgqc"] Feb 16 13:47:40 crc kubenswrapper[4812]: I0216 13:47:40.512707 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8kkws" Feb 16 13:47:44 crc kubenswrapper[4812]: I0216 13:47:44.548917 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:47:44 crc kubenswrapper[4812]: I0216 13:47:44.549225 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:47:55 crc kubenswrapper[4812]: I0216 13:47:55.166226 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6"] Feb 16 13:47:55 crc kubenswrapper[4812]: I0216 13:47:55.167840 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" Feb 16 13:47:55 crc kubenswrapper[4812]: I0216 13:47:55.170061 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 13:47:55 crc kubenswrapper[4812]: I0216 13:47:55.176227 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6"] Feb 16 13:47:55 crc kubenswrapper[4812]: I0216 13:47:55.262009 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r2d8\" (UniqueName: \"kubernetes.io/projected/f7fc9c91-5507-47f3-a456-4e415f0fab79-kube-api-access-2r2d8\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6\" (UID: \"f7fc9c91-5507-47f3-a456-4e415f0fab79\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" Feb 16 13:47:55 crc kubenswrapper[4812]: I0216 13:47:55.262090 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7fc9c91-5507-47f3-a456-4e415f0fab79-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6\" (UID: \"f7fc9c91-5507-47f3-a456-4e415f0fab79\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" Feb 16 13:47:55 crc kubenswrapper[4812]: I0216 13:47:55.262137 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7fc9c91-5507-47f3-a456-4e415f0fab79-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6\" (UID: \"f7fc9c91-5507-47f3-a456-4e415f0fab79\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" Feb 16 13:47:55 crc kubenswrapper[4812]: I0216 13:47:55.362972 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r2d8\" (UniqueName: \"kubernetes.io/projected/f7fc9c91-5507-47f3-a456-4e415f0fab79-kube-api-access-2r2d8\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6\" (UID: \"f7fc9c91-5507-47f3-a456-4e415f0fab79\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" Feb 16 13:47:55 crc kubenswrapper[4812]: I0216 13:47:55.363051 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7fc9c91-5507-47f3-a456-4e415f0fab79-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6\" (UID: \"f7fc9c91-5507-47f3-a456-4e415f0fab79\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" Feb 16 13:47:55 crc kubenswrapper[4812]: I0216 13:47:55.363104 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7fc9c91-5507-47f3-a456-4e415f0fab79-util\") pod 
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6\" (UID: \"f7fc9c91-5507-47f3-a456-4e415f0fab79\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" Feb 16 13:47:55 crc kubenswrapper[4812]: I0216 13:47:55.363712 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7fc9c91-5507-47f3-a456-4e415f0fab79-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6\" (UID: \"f7fc9c91-5507-47f3-a456-4e415f0fab79\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" Feb 16 13:47:55 crc kubenswrapper[4812]: I0216 13:47:55.363794 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7fc9c91-5507-47f3-a456-4e415f0fab79-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6\" (UID: \"f7fc9c91-5507-47f3-a456-4e415f0fab79\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" Feb 16 13:47:55 crc kubenswrapper[4812]: I0216 13:47:55.386465 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r2d8\" (UniqueName: \"kubernetes.io/projected/f7fc9c91-5507-47f3-a456-4e415f0fab79-kube-api-access-2r2d8\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6\" (UID: \"f7fc9c91-5507-47f3-a456-4e415f0fab79\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" Feb 16 13:47:55 crc kubenswrapper[4812]: I0216 13:47:55.483345 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" Feb 16 13:47:55 crc kubenswrapper[4812]: I0216 13:47:55.955570 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6"] Feb 16 13:47:55 crc kubenswrapper[4812]: W0216 13:47:55.963710 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7fc9c91_5507_47f3_a456_4e415f0fab79.slice/crio-c755005e33887a768df871f668373a5c691475e6362006dab1e3b8478621f516 WatchSource:0}: Error finding container c755005e33887a768df871f668373a5c691475e6362006dab1e3b8478621f516: Status 404 returned error can't find the container with id c755005e33887a768df871f668373a5c691475e6362006dab1e3b8478621f516 Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.092394 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-tpgqc" podUID="d8f24d90-54d8-4344-8140-c9fa919b456a" containerName="console" containerID="cri-o://44a7b19ee043c83429b1625f0f31f28cfa515c2e5745104244c3e3557f6bdfdb" gracePeriod=15 Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.218063 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" event={"ID":"f7fc9c91-5507-47f3-a456-4e415f0fab79","Type":"ContainerStarted","Data":"c755005e33887a768df871f668373a5c691475e6362006dab1e3b8478621f516"} Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.220380 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tpgqc_d8f24d90-54d8-4344-8140-c9fa919b456a/console/0.log" Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.220416 4812 generic.go:334] "Generic (PLEG): container finished" podID="d8f24d90-54d8-4344-8140-c9fa919b456a" 
containerID="44a7b19ee043c83429b1625f0f31f28cfa515c2e5745104244c3e3557f6bdfdb" exitCode=2 Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.220437 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tpgqc" event={"ID":"d8f24d90-54d8-4344-8140-c9fa919b456a","Type":"ContainerDied","Data":"44a7b19ee043c83429b1625f0f31f28cfa515c2e5745104244c3e3557f6bdfdb"} Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.859953 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tpgqc_d8f24d90-54d8-4344-8140-c9fa919b456a/console/0.log" Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.860037 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.924901 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-service-ca\") pod \"d8f24d90-54d8-4344-8140-c9fa919b456a\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.924979 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8f24d90-54d8-4344-8140-c9fa919b456a-console-serving-cert\") pod \"d8f24d90-54d8-4344-8140-c9fa919b456a\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.925020 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-console-config\") pod \"d8f24d90-54d8-4344-8140-c9fa919b456a\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.925048 4812 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-oauth-serving-cert\") pod \"d8f24d90-54d8-4344-8140-c9fa919b456a\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.925076 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdcp5\" (UniqueName: \"kubernetes.io/projected/d8f24d90-54d8-4344-8140-c9fa919b456a-kube-api-access-bdcp5\") pod \"d8f24d90-54d8-4344-8140-c9fa919b456a\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.925115 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-trusted-ca-bundle\") pod \"d8f24d90-54d8-4344-8140-c9fa919b456a\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.925137 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8f24d90-54d8-4344-8140-c9fa919b456a-console-oauth-config\") pod \"d8f24d90-54d8-4344-8140-c9fa919b456a\" (UID: \"d8f24d90-54d8-4344-8140-c9fa919b456a\") " Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.925843 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-service-ca" (OuterVolumeSpecName: "service-ca") pod "d8f24d90-54d8-4344-8140-c9fa919b456a" (UID: "d8f24d90-54d8-4344-8140-c9fa919b456a"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.925980 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-console-config" (OuterVolumeSpecName: "console-config") pod "d8f24d90-54d8-4344-8140-c9fa919b456a" (UID: "d8f24d90-54d8-4344-8140-c9fa919b456a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.926697 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d8f24d90-54d8-4344-8140-c9fa919b456a" (UID: "d8f24d90-54d8-4344-8140-c9fa919b456a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.926725 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d8f24d90-54d8-4344-8140-c9fa919b456a" (UID: "d8f24d90-54d8-4344-8140-c9fa919b456a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.946179 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8f24d90-54d8-4344-8140-c9fa919b456a-kube-api-access-bdcp5" (OuterVolumeSpecName: "kube-api-access-bdcp5") pod "d8f24d90-54d8-4344-8140-c9fa919b456a" (UID: "d8f24d90-54d8-4344-8140-c9fa919b456a"). InnerVolumeSpecName "kube-api-access-bdcp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.952473 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8f24d90-54d8-4344-8140-c9fa919b456a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d8f24d90-54d8-4344-8140-c9fa919b456a" (UID: "d8f24d90-54d8-4344-8140-c9fa919b456a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:47:56 crc kubenswrapper[4812]: I0216 13:47:56.953872 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8f24d90-54d8-4344-8140-c9fa919b456a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d8f24d90-54d8-4344-8140-c9fa919b456a" (UID: "d8f24d90-54d8-4344-8140-c9fa919b456a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:47:57 crc kubenswrapper[4812]: I0216 13:47:57.026053 4812 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:57 crc kubenswrapper[4812]: I0216 13:47:57.026088 4812 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8f24d90-54d8-4344-8140-c9fa919b456a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:57 crc kubenswrapper[4812]: I0216 13:47:57.026101 4812 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:57 crc kubenswrapper[4812]: I0216 13:47:57.026111 4812 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:57 crc kubenswrapper[4812]: I0216 13:47:57.026120 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdcp5\" (UniqueName: \"kubernetes.io/projected/d8f24d90-54d8-4344-8140-c9fa919b456a-kube-api-access-bdcp5\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:57 crc kubenswrapper[4812]: I0216 13:47:57.026129 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8f24d90-54d8-4344-8140-c9fa919b456a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:57 crc kubenswrapper[4812]: I0216 13:47:57.026137 4812 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8f24d90-54d8-4344-8140-c9fa919b456a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:47:57 crc kubenswrapper[4812]: I0216 13:47:57.229622 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tpgqc_d8f24d90-54d8-4344-8140-c9fa919b456a/console/0.log" Feb 16 13:47:57 crc kubenswrapper[4812]: I0216 13:47:57.229724 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tpgqc" event={"ID":"d8f24d90-54d8-4344-8140-c9fa919b456a","Type":"ContainerDied","Data":"c9a4a8fa70f753527518324fed561af1b274e0616d6a4cdded6e757866a0c53e"} Feb 16 13:47:57 crc kubenswrapper[4812]: I0216 13:47:57.229754 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-tpgqc" Feb 16 13:47:57 crc kubenswrapper[4812]: I0216 13:47:57.229769 4812 scope.go:117] "RemoveContainer" containerID="44a7b19ee043c83429b1625f0f31f28cfa515c2e5745104244c3e3557f6bdfdb" Feb 16 13:47:57 crc kubenswrapper[4812]: I0216 13:47:57.231601 4812 generic.go:334] "Generic (PLEG): container finished" podID="f7fc9c91-5507-47f3-a456-4e415f0fab79" containerID="cc3c7a57831c6fb4fcf730ee8f7779a7567ba7d475ab207756c783155fc86140" exitCode=0 Feb 16 13:47:57 crc kubenswrapper[4812]: I0216 13:47:57.231657 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" event={"ID":"f7fc9c91-5507-47f3-a456-4e415f0fab79","Type":"ContainerDied","Data":"cc3c7a57831c6fb4fcf730ee8f7779a7567ba7d475ab207756c783155fc86140"} Feb 16 13:47:57 crc kubenswrapper[4812]: I0216 13:47:57.269812 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tpgqc"] Feb 16 13:47:57 crc kubenswrapper[4812]: I0216 13:47:57.275155 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-tpgqc"] Feb 16 13:47:57 crc kubenswrapper[4812]: I0216 13:47:57.886834 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8f24d90-54d8-4344-8140-c9fa919b456a" path="/var/lib/kubelet/pods/d8f24d90-54d8-4344-8140-c9fa919b456a/volumes" Feb 16 13:47:59 crc kubenswrapper[4812]: I0216 13:47:59.246640 4812 generic.go:334] "Generic (PLEG): container finished" podID="f7fc9c91-5507-47f3-a456-4e415f0fab79" containerID="93893178ac3708eedbc53e3be3ee1df558bad362e8a4e9ea4f387c406840bc1a" exitCode=0 Feb 16 13:47:59 crc kubenswrapper[4812]: I0216 13:47:59.246912 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" 
event={"ID":"f7fc9c91-5507-47f3-a456-4e415f0fab79","Type":"ContainerDied","Data":"93893178ac3708eedbc53e3be3ee1df558bad362e8a4e9ea4f387c406840bc1a"} Feb 16 13:48:00 crc kubenswrapper[4812]: I0216 13:48:00.255512 4812 generic.go:334] "Generic (PLEG): container finished" podID="f7fc9c91-5507-47f3-a456-4e415f0fab79" containerID="5b1b44ca9edebe2bc26483e9a1d7d0ccbb762d9bce9f3abe9ae6929953604023" exitCode=0 Feb 16 13:48:00 crc kubenswrapper[4812]: I0216 13:48:00.255561 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" event={"ID":"f7fc9c91-5507-47f3-a456-4e415f0fab79","Type":"ContainerDied","Data":"5b1b44ca9edebe2bc26483e9a1d7d0ccbb762d9bce9f3abe9ae6929953604023"} Feb 16 13:48:01 crc kubenswrapper[4812]: I0216 13:48:01.863170 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" Feb 16 13:48:02 crc kubenswrapper[4812]: I0216 13:48:02.040844 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7fc9c91-5507-47f3-a456-4e415f0fab79-bundle\") pod \"f7fc9c91-5507-47f3-a456-4e415f0fab79\" (UID: \"f7fc9c91-5507-47f3-a456-4e415f0fab79\") " Feb 16 13:48:02 crc kubenswrapper[4812]: I0216 13:48:02.041011 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r2d8\" (UniqueName: \"kubernetes.io/projected/f7fc9c91-5507-47f3-a456-4e415f0fab79-kube-api-access-2r2d8\") pod \"f7fc9c91-5507-47f3-a456-4e415f0fab79\" (UID: \"f7fc9c91-5507-47f3-a456-4e415f0fab79\") " Feb 16 13:48:02 crc kubenswrapper[4812]: I0216 13:48:02.041049 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7fc9c91-5507-47f3-a456-4e415f0fab79-util\") pod \"f7fc9c91-5507-47f3-a456-4e415f0fab79\" (UID: 
\"f7fc9c91-5507-47f3-a456-4e415f0fab79\") " Feb 16 13:48:02 crc kubenswrapper[4812]: I0216 13:48:02.042134 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7fc9c91-5507-47f3-a456-4e415f0fab79-bundle" (OuterVolumeSpecName: "bundle") pod "f7fc9c91-5507-47f3-a456-4e415f0fab79" (UID: "f7fc9c91-5507-47f3-a456-4e415f0fab79"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:48:02 crc kubenswrapper[4812]: I0216 13:48:02.047936 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7fc9c91-5507-47f3-a456-4e415f0fab79-kube-api-access-2r2d8" (OuterVolumeSpecName: "kube-api-access-2r2d8") pod "f7fc9c91-5507-47f3-a456-4e415f0fab79" (UID: "f7fc9c91-5507-47f3-a456-4e415f0fab79"). InnerVolumeSpecName "kube-api-access-2r2d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:48:02 crc kubenswrapper[4812]: I0216 13:48:02.052423 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7fc9c91-5507-47f3-a456-4e415f0fab79-util" (OuterVolumeSpecName: "util") pod "f7fc9c91-5507-47f3-a456-4e415f0fab79" (UID: "f7fc9c91-5507-47f3-a456-4e415f0fab79"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:48:02 crc kubenswrapper[4812]: I0216 13:48:02.142637 4812 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7fc9c91-5507-47f3-a456-4e415f0fab79-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:02 crc kubenswrapper[4812]: I0216 13:48:02.142681 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r2d8\" (UniqueName: \"kubernetes.io/projected/f7fc9c91-5507-47f3-a456-4e415f0fab79-kube-api-access-2r2d8\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:02 crc kubenswrapper[4812]: I0216 13:48:02.142691 4812 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7fc9c91-5507-47f3-a456-4e415f0fab79-util\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:02 crc kubenswrapper[4812]: I0216 13:48:02.269150 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" event={"ID":"f7fc9c91-5507-47f3-a456-4e415f0fab79","Type":"ContainerDied","Data":"c755005e33887a768df871f668373a5c691475e6362006dab1e3b8478621f516"} Feb 16 13:48:02 crc kubenswrapper[4812]: I0216 13:48:02.269206 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c755005e33887a768df871f668373a5c691475e6362006dab1e3b8478621f516" Feb 16 13:48:02 crc kubenswrapper[4812]: I0216 13:48:02.269285 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6" Feb 16 13:48:14 crc kubenswrapper[4812]: I0216 13:48:14.549671 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:48:14 crc kubenswrapper[4812]: I0216 13:48:14.550395 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:48:14 crc kubenswrapper[4812]: I0216 13:48:14.550474 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:48:14 crc kubenswrapper[4812]: I0216 13:48:14.551385 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7cdd40ec1858c86be76b1abaa1c0c47ea05268682d8c62fb36cfc403870db38c"} pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:48:14 crc kubenswrapper[4812]: I0216 13:48:14.551462 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" containerID="cri-o://7cdd40ec1858c86be76b1abaa1c0c47ea05268682d8c62fb36cfc403870db38c" gracePeriod=600 Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.346469 4812 generic.go:334] "Generic (PLEG): 
container finished" podID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerID="7cdd40ec1858c86be76b1abaa1c0c47ea05268682d8c62fb36cfc403870db38c" exitCode=0 Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.346555 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerDied","Data":"7cdd40ec1858c86be76b1abaa1c0c47ea05268682d8c62fb36cfc403870db38c"} Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.347022 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerStarted","Data":"0779ef9b368371eaae022df11f7e6d3b1b2344936b30d611f68295ab80bea825"} Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.347044 4812 scope.go:117] "RemoveContainer" containerID="69f6102fd067a315bb5fa977a52583563bb8e2109920c634e663dafde3b8d90e" Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.858971 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc"] Feb 16 13:48:15 crc kubenswrapper[4812]: E0216 13:48:15.859204 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fc9c91-5507-47f3-a456-4e415f0fab79" containerName="pull" Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.859215 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fc9c91-5507-47f3-a456-4e415f0fab79" containerName="pull" Feb 16 13:48:15 crc kubenswrapper[4812]: E0216 13:48:15.859229 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f24d90-54d8-4344-8140-c9fa919b456a" containerName="console" Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.859235 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f24d90-54d8-4344-8140-c9fa919b456a" containerName="console" Feb 16 13:48:15 crc kubenswrapper[4812]: E0216 
13:48:15.859255 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fc9c91-5507-47f3-a456-4e415f0fab79" containerName="util" Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.859260 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fc9c91-5507-47f3-a456-4e415f0fab79" containerName="util" Feb 16 13:48:15 crc kubenswrapper[4812]: E0216 13:48:15.859270 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fc9c91-5507-47f3-a456-4e415f0fab79" containerName="extract" Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.859276 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fc9c91-5507-47f3-a456-4e415f0fab79" containerName="extract" Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.859369 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8f24d90-54d8-4344-8140-c9fa919b456a" containerName="console" Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.859388 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fc9c91-5507-47f3-a456-4e415f0fab79" containerName="extract" Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.859779 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc" Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.861947 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-2kjv7" Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.862611 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.862741 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.862800 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.862942 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.948391 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc"] Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.966531 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6746b0af-7980-47b3-bc36-374bc1bdc6d1-webhook-cert\") pod \"metallb-operator-controller-manager-7f8ffc447f-2c5xc\" (UID: \"6746b0af-7980-47b3-bc36-374bc1bdc6d1\") " pod="metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc" Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.967179 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6746b0af-7980-47b3-bc36-374bc1bdc6d1-apiservice-cert\") pod \"metallb-operator-controller-manager-7f8ffc447f-2c5xc\" (UID: 
\"6746b0af-7980-47b3-bc36-374bc1bdc6d1\") " pod="metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc" Feb 16 13:48:15 crc kubenswrapper[4812]: I0216 13:48:15.967230 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ccwd\" (UniqueName: \"kubernetes.io/projected/6746b0af-7980-47b3-bc36-374bc1bdc6d1-kube-api-access-9ccwd\") pod \"metallb-operator-controller-manager-7f8ffc447f-2c5xc\" (UID: \"6746b0af-7980-47b3-bc36-374bc1bdc6d1\") " pod="metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.067284 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qtrfh"] Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.068270 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6746b0af-7980-47b3-bc36-374bc1bdc6d1-apiservice-cert\") pod \"metallb-operator-controller-manager-7f8ffc447f-2c5xc\" (UID: \"6746b0af-7980-47b3-bc36-374bc1bdc6d1\") " pod="metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.068353 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ccwd\" (UniqueName: \"kubernetes.io/projected/6746b0af-7980-47b3-bc36-374bc1bdc6d1-kube-api-access-9ccwd\") pod \"metallb-operator-controller-manager-7f8ffc447f-2c5xc\" (UID: \"6746b0af-7980-47b3-bc36-374bc1bdc6d1\") " pod="metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.068467 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6746b0af-7980-47b3-bc36-374bc1bdc6d1-webhook-cert\") pod \"metallb-operator-controller-manager-7f8ffc447f-2c5xc\" (UID: 
\"6746b0af-7980-47b3-bc36-374bc1bdc6d1\") " pod="metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.068662 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.077714 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6746b0af-7980-47b3-bc36-374bc1bdc6d1-webhook-cert\") pod \"metallb-operator-controller-manager-7f8ffc447f-2c5xc\" (UID: \"6746b0af-7980-47b3-bc36-374bc1bdc6d1\") " pod="metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.077727 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6746b0af-7980-47b3-bc36-374bc1bdc6d1-apiservice-cert\") pod \"metallb-operator-controller-manager-7f8ffc447f-2c5xc\" (UID: \"6746b0af-7980-47b3-bc36-374bc1bdc6d1\") " pod="metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.093298 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qtrfh"] Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.113140 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ccwd\" (UniqueName: \"kubernetes.io/projected/6746b0af-7980-47b3-bc36-374bc1bdc6d1-kube-api-access-9ccwd\") pod \"metallb-operator-controller-manager-7f8ffc447f-2c5xc\" (UID: \"6746b0af-7980-47b3-bc36-374bc1bdc6d1\") " pod="metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.170717 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-catalog-content\") pod \"certified-operators-qtrfh\" (UID: \"c5124a0e-4da6-4613-9e4e-fe91add1b2ea\") " pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.170852 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86qlm\" (UniqueName: \"kubernetes.io/projected/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-kube-api-access-86qlm\") pod \"certified-operators-qtrfh\" (UID: \"c5124a0e-4da6-4613-9e4e-fe91add1b2ea\") " pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.170915 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-utilities\") pod \"certified-operators-qtrfh\" (UID: \"c5124a0e-4da6-4613-9e4e-fe91add1b2ea\") " pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.177924 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.271785 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-catalog-content\") pod \"certified-operators-qtrfh\" (UID: \"c5124a0e-4da6-4613-9e4e-fe91add1b2ea\") " pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.271883 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86qlm\" (UniqueName: \"kubernetes.io/projected/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-kube-api-access-86qlm\") pod \"certified-operators-qtrfh\" (UID: \"c5124a0e-4da6-4613-9e4e-fe91add1b2ea\") " pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.271930 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-utilities\") pod \"certified-operators-qtrfh\" (UID: \"c5124a0e-4da6-4613-9e4e-fe91add1b2ea\") " pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.272265 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m"] Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.272639 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-catalog-content\") pod \"certified-operators-qtrfh\" (UID: \"c5124a0e-4da6-4613-9e4e-fe91add1b2ea\") " pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.272812 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-utilities\") pod \"certified-operators-qtrfh\" (UID: \"c5124a0e-4da6-4613-9e4e-fe91add1b2ea\") " pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.273193 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.275811 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.276052 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.276287 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-wcxbx" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.288387 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m"] Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.297528 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86qlm\" (UniqueName: \"kubernetes.io/projected/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-kube-api-access-86qlm\") pod \"certified-operators-qtrfh\" (UID: \"c5124a0e-4da6-4613-9e4e-fe91add1b2ea\") " pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.375158 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5gvw\" (UniqueName: \"kubernetes.io/projected/4567903c-04af-432e-8c9d-7e7150f94226-kube-api-access-z5gvw\") pod \"metallb-operator-webhook-server-6d44948dbf-dlj6m\" (UID: \"4567903c-04af-432e-8c9d-7e7150f94226\") " 
pod="metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.375197 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4567903c-04af-432e-8c9d-7e7150f94226-apiservice-cert\") pod \"metallb-operator-webhook-server-6d44948dbf-dlj6m\" (UID: \"4567903c-04af-432e-8c9d-7e7150f94226\") " pod="metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.375249 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4567903c-04af-432e-8c9d-7e7150f94226-webhook-cert\") pod \"metallb-operator-webhook-server-6d44948dbf-dlj6m\" (UID: \"4567903c-04af-432e-8c9d-7e7150f94226\") " pod="metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.456012 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.477757 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4567903c-04af-432e-8c9d-7e7150f94226-webhook-cert\") pod \"metallb-operator-webhook-server-6d44948dbf-dlj6m\" (UID: \"4567903c-04af-432e-8c9d-7e7150f94226\") " pod="metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.477921 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5gvw\" (UniqueName: \"kubernetes.io/projected/4567903c-04af-432e-8c9d-7e7150f94226-kube-api-access-z5gvw\") pod \"metallb-operator-webhook-server-6d44948dbf-dlj6m\" (UID: \"4567903c-04af-432e-8c9d-7e7150f94226\") " pod="metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.477970 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4567903c-04af-432e-8c9d-7e7150f94226-apiservice-cert\") pod \"metallb-operator-webhook-server-6d44948dbf-dlj6m\" (UID: \"4567903c-04af-432e-8c9d-7e7150f94226\") " pod="metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.483089 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4567903c-04af-432e-8c9d-7e7150f94226-apiservice-cert\") pod \"metallb-operator-webhook-server-6d44948dbf-dlj6m\" (UID: \"4567903c-04af-432e-8c9d-7e7150f94226\") " pod="metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.487896 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/4567903c-04af-432e-8c9d-7e7150f94226-webhook-cert\") pod \"metallb-operator-webhook-server-6d44948dbf-dlj6m\" (UID: \"4567903c-04af-432e-8c9d-7e7150f94226\") " pod="metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.511027 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5gvw\" (UniqueName: \"kubernetes.io/projected/4567903c-04af-432e-8c9d-7e7150f94226-kube-api-access-z5gvw\") pod \"metallb-operator-webhook-server-6d44948dbf-dlj6m\" (UID: \"4567903c-04af-432e-8c9d-7e7150f94226\") " pod="metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.613947 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m" Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.768814 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc"] Feb 16 13:48:16 crc kubenswrapper[4812]: I0216 13:48:16.857143 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qtrfh"] Feb 16 13:48:17 crc kubenswrapper[4812]: I0216 13:48:17.173807 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m"] Feb 16 13:48:17 crc kubenswrapper[4812]: I0216 13:48:17.372800 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m" event={"ID":"4567903c-04af-432e-8c9d-7e7150f94226","Type":"ContainerStarted","Data":"b20944ab0b685393c6e1da9c402a8fe471c248e0772a722c7fb2a97d480d69c2"} Feb 16 13:48:17 crc kubenswrapper[4812]: I0216 13:48:17.373928 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc" 
event={"ID":"6746b0af-7980-47b3-bc36-374bc1bdc6d1","Type":"ContainerStarted","Data":"f18119a8945fa01805523847fd7bd53e0c556c789edba976047a061212ee545b"} Feb 16 13:48:17 crc kubenswrapper[4812]: I0216 13:48:17.375609 4812 generic.go:334] "Generic (PLEG): container finished" podID="c5124a0e-4da6-4613-9e4e-fe91add1b2ea" containerID="a45df0c7b8e853cf33cf50b1040537a78144f982d3576c96e67da225aac8faca" exitCode=0 Feb 16 13:48:17 crc kubenswrapper[4812]: I0216 13:48:17.375644 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtrfh" event={"ID":"c5124a0e-4da6-4613-9e4e-fe91add1b2ea","Type":"ContainerDied","Data":"a45df0c7b8e853cf33cf50b1040537a78144f982d3576c96e67da225aac8faca"} Feb 16 13:48:17 crc kubenswrapper[4812]: I0216 13:48:17.375662 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtrfh" event={"ID":"c5124a0e-4da6-4613-9e4e-fe91add1b2ea","Type":"ContainerStarted","Data":"10ba5da3587901b11a448d6525fa2379523822749f46c82b9564a462869c6b10"} Feb 16 13:48:18 crc kubenswrapper[4812]: I0216 13:48:18.385960 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtrfh" event={"ID":"c5124a0e-4da6-4613-9e4e-fe91add1b2ea","Type":"ContainerStarted","Data":"2e7e3c4e9803f14810823ab0b1b28f9d2700e43daf30fc2e0313599048310c2c"} Feb 16 13:48:19 crc kubenswrapper[4812]: I0216 13:48:19.400475 4812 generic.go:334] "Generic (PLEG): container finished" podID="c5124a0e-4da6-4613-9e4e-fe91add1b2ea" containerID="2e7e3c4e9803f14810823ab0b1b28f9d2700e43daf30fc2e0313599048310c2c" exitCode=0 Feb 16 13:48:19 crc kubenswrapper[4812]: I0216 13:48:19.400526 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtrfh" event={"ID":"c5124a0e-4da6-4613-9e4e-fe91add1b2ea","Type":"ContainerDied","Data":"2e7e3c4e9803f14810823ab0b1b28f9d2700e43daf30fc2e0313599048310c2c"} Feb 16 13:48:20 crc kubenswrapper[4812]: 
I0216 13:48:20.420254 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtrfh" event={"ID":"c5124a0e-4da6-4613-9e4e-fe91add1b2ea","Type":"ContainerStarted","Data":"9889a1d541a40673c8960e5f21c6efa79ff1ce84427d5f86cf1039cb170b6421"} Feb 16 13:48:20 crc kubenswrapper[4812]: I0216 13:48:20.444924 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qtrfh" podStartSLOduration=1.9051226589999999 podStartE2EDuration="4.44490843s" podCreationTimestamp="2026-02-16 13:48:16 +0000 UTC" firstStartedPulling="2026-02-16 13:48:17.377201427 +0000 UTC m=+986.441532128" lastFinishedPulling="2026-02-16 13:48:19.916987198 +0000 UTC m=+988.981317899" observedRunningTime="2026-02-16 13:48:20.4434964 +0000 UTC m=+989.507827111" watchObservedRunningTime="2026-02-16 13:48:20.44490843 +0000 UTC m=+989.509239131" Feb 16 13:48:26 crc kubenswrapper[4812]: I0216 13:48:26.456596 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:26 crc kubenswrapper[4812]: I0216 13:48:26.457135 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:26 crc kubenswrapper[4812]: I0216 13:48:26.497030 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:26 crc kubenswrapper[4812]: I0216 13:48:26.693098 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:27 crc kubenswrapper[4812]: I0216 13:48:27.173466 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qtrfh"] Feb 16 13:48:27 crc kubenswrapper[4812]: I0216 13:48:27.661782 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m" event={"ID":"4567903c-04af-432e-8c9d-7e7150f94226","Type":"ContainerStarted","Data":"bb503f5029fc09c59dcabd0f71f62d99e6b19bdbbeb61880b6aee76632fa8409"} Feb 16 13:48:27 crc kubenswrapper[4812]: I0216 13:48:27.661854 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m" Feb 16 13:48:27 crc kubenswrapper[4812]: I0216 13:48:27.663383 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc" event={"ID":"6746b0af-7980-47b3-bc36-374bc1bdc6d1","Type":"ContainerStarted","Data":"63ff7120ae1ebb148859871157397b468a8a55f689f4bef2aef5d07c2751291e"} Feb 16 13:48:27 crc kubenswrapper[4812]: I0216 13:48:27.685669 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m" podStartSLOduration=2.100458224 podStartE2EDuration="11.685647187s" podCreationTimestamp="2026-02-16 13:48:16 +0000 UTC" firstStartedPulling="2026-02-16 13:48:17.182177981 +0000 UTC m=+986.246508682" lastFinishedPulling="2026-02-16 13:48:26.767366944 +0000 UTC m=+995.831697645" observedRunningTime="2026-02-16 13:48:27.683105004 +0000 UTC m=+996.747435735" watchObservedRunningTime="2026-02-16 13:48:27.685647187 +0000 UTC m=+996.749977888" Feb 16 13:48:27 crc kubenswrapper[4812]: I0216 13:48:27.706474 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc" podStartSLOduration=2.749825233 podStartE2EDuration="12.706432758s" podCreationTimestamp="2026-02-16 13:48:15 +0000 UTC" firstStartedPulling="2026-02-16 13:48:16.804274325 +0000 UTC m=+985.868605026" lastFinishedPulling="2026-02-16 13:48:26.76088184 +0000 UTC m=+995.825212551" observedRunningTime="2026-02-16 13:48:27.700821698 +0000 UTC m=+996.765152419" 
watchObservedRunningTime="2026-02-16 13:48:27.706432758 +0000 UTC m=+996.770763459" Feb 16 13:48:28 crc kubenswrapper[4812]: I0216 13:48:28.669764 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qtrfh" podUID="c5124a0e-4da6-4613-9e4e-fe91add1b2ea" containerName="registry-server" containerID="cri-o://9889a1d541a40673c8960e5f21c6efa79ff1ce84427d5f86cf1039cb170b6421" gracePeriod=2 Feb 16 13:48:28 crc kubenswrapper[4812]: I0216 13:48:28.670318 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.044146 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.142058 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-catalog-content\") pod \"c5124a0e-4da6-4613-9e4e-fe91add1b2ea\" (UID: \"c5124a0e-4da6-4613-9e4e-fe91add1b2ea\") " Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.142134 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-utilities\") pod \"c5124a0e-4da6-4613-9e4e-fe91add1b2ea\" (UID: \"c5124a0e-4da6-4613-9e4e-fe91add1b2ea\") " Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.142253 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86qlm\" (UniqueName: \"kubernetes.io/projected/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-kube-api-access-86qlm\") pod \"c5124a0e-4da6-4613-9e4e-fe91add1b2ea\" (UID: \"c5124a0e-4da6-4613-9e4e-fe91add1b2ea\") " Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.143962 4812 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-utilities" (OuterVolumeSpecName: "utilities") pod "c5124a0e-4da6-4613-9e4e-fe91add1b2ea" (UID: "c5124a0e-4da6-4613-9e4e-fe91add1b2ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.148646 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-kube-api-access-86qlm" (OuterVolumeSpecName: "kube-api-access-86qlm") pod "c5124a0e-4da6-4613-9e4e-fe91add1b2ea" (UID: "c5124a0e-4da6-4613-9e4e-fe91add1b2ea"). InnerVolumeSpecName "kube-api-access-86qlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.195427 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5124a0e-4da6-4613-9e4e-fe91add1b2ea" (UID: "c5124a0e-4da6-4613-9e4e-fe91add1b2ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.244305 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86qlm\" (UniqueName: \"kubernetes.io/projected/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-kube-api-access-86qlm\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.244595 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.244607 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5124a0e-4da6-4613-9e4e-fe91add1b2ea-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.678628 4812 generic.go:334] "Generic (PLEG): container finished" podID="c5124a0e-4da6-4613-9e4e-fe91add1b2ea" containerID="9889a1d541a40673c8960e5f21c6efa79ff1ce84427d5f86cf1039cb170b6421" exitCode=0 Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.678706 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qtrfh" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.678771 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtrfh" event={"ID":"c5124a0e-4da6-4613-9e4e-fe91add1b2ea","Type":"ContainerDied","Data":"9889a1d541a40673c8960e5f21c6efa79ff1ce84427d5f86cf1039cb170b6421"} Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.678842 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtrfh" event={"ID":"c5124a0e-4da6-4613-9e4e-fe91add1b2ea","Type":"ContainerDied","Data":"10ba5da3587901b11a448d6525fa2379523822749f46c82b9564a462869c6b10"} Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.678869 4812 scope.go:117] "RemoveContainer" containerID="9889a1d541a40673c8960e5f21c6efa79ff1ce84427d5f86cf1039cb170b6421" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.700042 4812 scope.go:117] "RemoveContainer" containerID="2e7e3c4e9803f14810823ab0b1b28f9d2700e43daf30fc2e0313599048310c2c" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.712003 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qtrfh"] Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.716124 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qtrfh"] Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.730798 4812 scope.go:117] "RemoveContainer" containerID="a45df0c7b8e853cf33cf50b1040537a78144f982d3576c96e67da225aac8faca" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.744872 4812 scope.go:117] "RemoveContainer" containerID="9889a1d541a40673c8960e5f21c6efa79ff1ce84427d5f86cf1039cb170b6421" Feb 16 13:48:29 crc kubenswrapper[4812]: E0216 13:48:29.745310 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9889a1d541a40673c8960e5f21c6efa79ff1ce84427d5f86cf1039cb170b6421\": container with ID starting with 9889a1d541a40673c8960e5f21c6efa79ff1ce84427d5f86cf1039cb170b6421 not found: ID does not exist" containerID="9889a1d541a40673c8960e5f21c6efa79ff1ce84427d5f86cf1039cb170b6421" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.745337 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9889a1d541a40673c8960e5f21c6efa79ff1ce84427d5f86cf1039cb170b6421"} err="failed to get container status \"9889a1d541a40673c8960e5f21c6efa79ff1ce84427d5f86cf1039cb170b6421\": rpc error: code = NotFound desc = could not find container \"9889a1d541a40673c8960e5f21c6efa79ff1ce84427d5f86cf1039cb170b6421\": container with ID starting with 9889a1d541a40673c8960e5f21c6efa79ff1ce84427d5f86cf1039cb170b6421 not found: ID does not exist" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.745358 4812 scope.go:117] "RemoveContainer" containerID="2e7e3c4e9803f14810823ab0b1b28f9d2700e43daf30fc2e0313599048310c2c" Feb 16 13:48:29 crc kubenswrapper[4812]: E0216 13:48:29.745809 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e7e3c4e9803f14810823ab0b1b28f9d2700e43daf30fc2e0313599048310c2c\": container with ID starting with 2e7e3c4e9803f14810823ab0b1b28f9d2700e43daf30fc2e0313599048310c2c not found: ID does not exist" containerID="2e7e3c4e9803f14810823ab0b1b28f9d2700e43daf30fc2e0313599048310c2c" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.745843 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e7e3c4e9803f14810823ab0b1b28f9d2700e43daf30fc2e0313599048310c2c"} err="failed to get container status \"2e7e3c4e9803f14810823ab0b1b28f9d2700e43daf30fc2e0313599048310c2c\": rpc error: code = NotFound desc = could not find container \"2e7e3c4e9803f14810823ab0b1b28f9d2700e43daf30fc2e0313599048310c2c\": container with ID 
starting with 2e7e3c4e9803f14810823ab0b1b28f9d2700e43daf30fc2e0313599048310c2c not found: ID does not exist" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.745861 4812 scope.go:117] "RemoveContainer" containerID="a45df0c7b8e853cf33cf50b1040537a78144f982d3576c96e67da225aac8faca" Feb 16 13:48:29 crc kubenswrapper[4812]: E0216 13:48:29.746232 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a45df0c7b8e853cf33cf50b1040537a78144f982d3576c96e67da225aac8faca\": container with ID starting with a45df0c7b8e853cf33cf50b1040537a78144f982d3576c96e67da225aac8faca not found: ID does not exist" containerID="a45df0c7b8e853cf33cf50b1040537a78144f982d3576c96e67da225aac8faca" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.746254 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a45df0c7b8e853cf33cf50b1040537a78144f982d3576c96e67da225aac8faca"} err="failed to get container status \"a45df0c7b8e853cf33cf50b1040537a78144f982d3576c96e67da225aac8faca\": rpc error: code = NotFound desc = could not find container \"a45df0c7b8e853cf33cf50b1040537a78144f982d3576c96e67da225aac8faca\": container with ID starting with a45df0c7b8e853cf33cf50b1040537a78144f982d3576c96e67da225aac8faca not found: ID does not exist" Feb 16 13:48:29 crc kubenswrapper[4812]: I0216 13:48:29.888633 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5124a0e-4da6-4613-9e4e-fe91add1b2ea" path="/var/lib/kubelet/pods/c5124a0e-4da6-4613-9e4e-fe91add1b2ea/volumes" Feb 16 13:48:36 crc kubenswrapper[4812]: I0216 13:48:36.621534 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6d44948dbf-dlj6m" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.181945 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-controller-manager-7f8ffc447f-2c5xc" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.917460 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-6zjgs"] Feb 16 13:48:56 crc kubenswrapper[4812]: E0216 13:48:56.917782 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5124a0e-4da6-4613-9e4e-fe91add1b2ea" containerName="extract-utilities" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.917803 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5124a0e-4da6-4613-9e4e-fe91add1b2ea" containerName="extract-utilities" Feb 16 13:48:56 crc kubenswrapper[4812]: E0216 13:48:56.917818 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5124a0e-4da6-4613-9e4e-fe91add1b2ea" containerName="registry-server" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.917824 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5124a0e-4da6-4613-9e4e-fe91add1b2ea" containerName="registry-server" Feb 16 13:48:56 crc kubenswrapper[4812]: E0216 13:48:56.917834 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5124a0e-4da6-4613-9e4e-fe91add1b2ea" containerName="extract-content" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.917840 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5124a0e-4da6-4613-9e4e-fe91add1b2ea" containerName="extract-content" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.917978 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5124a0e-4da6-4613-9e4e-fe91add1b2ea" containerName="registry-server" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.920541 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.922360 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.922625 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.922823 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-mwk4g" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.936983 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc"] Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.937823 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.940083 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.958368 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc"] Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.992105 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7aa158c4-bd4e-46d5-92f5-8635e722a673-reloader\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.992185 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h7s6\" (UniqueName: \"kubernetes.io/projected/78add67a-1f63-4b2a-88b5-39f2ef90c06e-kube-api-access-2h7s6\") pod 
\"frr-k8s-webhook-server-78b44bf5bb-2h7tc\" (UID: \"78add67a-1f63-4b2a-88b5-39f2ef90c06e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.992239 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7aa158c4-bd4e-46d5-92f5-8635e722a673-metrics\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.992267 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rvdb\" (UniqueName: \"kubernetes.io/projected/7aa158c4-bd4e-46d5-92f5-8635e722a673-kube-api-access-5rvdb\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.992288 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7aa158c4-bd4e-46d5-92f5-8635e722a673-frr-conf\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.992312 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7aa158c4-bd4e-46d5-92f5-8635e722a673-metrics-certs\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.992395 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78add67a-1f63-4b2a-88b5-39f2ef90c06e-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-2h7tc\" (UID: 
\"78add67a-1f63-4b2a-88b5-39f2ef90c06e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.992419 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7aa158c4-bd4e-46d5-92f5-8635e722a673-frr-sockets\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:56 crc kubenswrapper[4812]: I0216 13:48:56.992459 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7aa158c4-bd4e-46d5-92f5-8635e722a673-frr-startup\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.039882 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-wpmzn"] Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.040989 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-wpmzn" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.042821 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.042946 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-q7pmw" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.043577 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.044154 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.076716 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-k45w2"] Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.077919 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-k45w2" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.081833 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.087365 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-k45w2"] Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.093217 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5977d87d-ec62-4a14-8df1-d1b37209d48d-cert\") pod \"controller-69bbfbf88f-k45w2\" (UID: \"5977d87d-ec62-4a14-8df1-d1b37209d48d\") " pod="metallb-system/controller-69bbfbf88f-k45w2" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.093271 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-metrics-certs\") pod \"speaker-wpmzn\" (UID: \"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4\") " pod="metallb-system/speaker-wpmzn" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.093322 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7aa158c4-bd4e-46d5-92f5-8635e722a673-reloader\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.093374 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h7s6\" (UniqueName: \"kubernetes.io/projected/78add67a-1f63-4b2a-88b5-39f2ef90c06e-kube-api-access-2h7s6\") pod \"frr-k8s-webhook-server-78b44bf5bb-2h7tc\" (UID: \"78add67a-1f63-4b2a-88b5-39f2ef90c06e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc" Feb 16 13:48:57 crc 
kubenswrapper[4812]: I0216 13:48:57.093407 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-metallb-excludel2\") pod \"speaker-wpmzn\" (UID: \"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4\") " pod="metallb-system/speaker-wpmzn" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.093436 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z7dp\" (UniqueName: \"kubernetes.io/projected/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-kube-api-access-5z7dp\") pod \"speaker-wpmzn\" (UID: \"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4\") " pod="metallb-system/speaker-wpmzn" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.093484 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7aa158c4-bd4e-46d5-92f5-8635e722a673-metrics\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.093509 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rvdb\" (UniqueName: \"kubernetes.io/projected/7aa158c4-bd4e-46d5-92f5-8635e722a673-kube-api-access-5rvdb\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.093534 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7aa158c4-bd4e-46d5-92f5-8635e722a673-frr-conf\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.093556 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7aa158c4-bd4e-46d5-92f5-8635e722a673-metrics-certs\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.093581 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-memberlist\") pod \"speaker-wpmzn\" (UID: \"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4\") " pod="metallb-system/speaker-wpmzn" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.093609 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5977d87d-ec62-4a14-8df1-d1b37209d48d-metrics-certs\") pod \"controller-69bbfbf88f-k45w2\" (UID: \"5977d87d-ec62-4a14-8df1-d1b37209d48d\") " pod="metallb-system/controller-69bbfbf88f-k45w2" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.093666 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78add67a-1f63-4b2a-88b5-39f2ef90c06e-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-2h7tc\" (UID: \"78add67a-1f63-4b2a-88b5-39f2ef90c06e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.093687 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7aa158c4-bd4e-46d5-92f5-8635e722a673-frr-sockets\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.093720 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpxvf\" (UniqueName: 
\"kubernetes.io/projected/5977d87d-ec62-4a14-8df1-d1b37209d48d-kube-api-access-tpxvf\") pod \"controller-69bbfbf88f-k45w2\" (UID: \"5977d87d-ec62-4a14-8df1-d1b37209d48d\") " pod="metallb-system/controller-69bbfbf88f-k45w2" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.093749 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7aa158c4-bd4e-46d5-92f5-8635e722a673-frr-startup\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.093824 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7aa158c4-bd4e-46d5-92f5-8635e722a673-reloader\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.094046 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7aa158c4-bd4e-46d5-92f5-8635e722a673-metrics\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: E0216 13:48:57.094119 4812 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 16 13:48:57 crc kubenswrapper[4812]: E0216 13:48:57.094130 4812 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 16 13:48:57 crc kubenswrapper[4812]: E0216 13:48:57.094157 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7aa158c4-bd4e-46d5-92f5-8635e722a673-metrics-certs podName:7aa158c4-bd4e-46d5-92f5-8635e722a673 nodeName:}" failed. 
No retries permitted until 2026-02-16 13:48:57.59414301 +0000 UTC m=+1026.658473711 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7aa158c4-bd4e-46d5-92f5-8635e722a673-metrics-certs") pod "frr-k8s-6zjgs" (UID: "7aa158c4-bd4e-46d5-92f5-8635e722a673") : secret "frr-k8s-certs-secret" not found Feb 16 13:48:57 crc kubenswrapper[4812]: E0216 13:48:57.094185 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78add67a-1f63-4b2a-88b5-39f2ef90c06e-cert podName:78add67a-1f63-4b2a-88b5-39f2ef90c06e nodeName:}" failed. No retries permitted until 2026-02-16 13:48:57.594164461 +0000 UTC m=+1026.658495242 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/78add67a-1f63-4b2a-88b5-39f2ef90c06e-cert") pod "frr-k8s-webhook-server-78b44bf5bb-2h7tc" (UID: "78add67a-1f63-4b2a-88b5-39f2ef90c06e") : secret "frr-k8s-webhook-server-cert" not found Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.094951 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7aa158c4-bd4e-46d5-92f5-8635e722a673-frr-startup\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.097839 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7aa158c4-bd4e-46d5-92f5-8635e722a673-frr-sockets\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.097856 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7aa158c4-bd4e-46d5-92f5-8635e722a673-frr-conf\") pod \"frr-k8s-6zjgs\" (UID: 
\"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.119603 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rvdb\" (UniqueName: \"kubernetes.io/projected/7aa158c4-bd4e-46d5-92f5-8635e722a673-kube-api-access-5rvdb\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.136566 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h7s6\" (UniqueName: \"kubernetes.io/projected/78add67a-1f63-4b2a-88b5-39f2ef90c06e-kube-api-access-2h7s6\") pod \"frr-k8s-webhook-server-78b44bf5bb-2h7tc\" (UID: \"78add67a-1f63-4b2a-88b5-39f2ef90c06e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.194940 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-metallb-excludel2\") pod \"speaker-wpmzn\" (UID: \"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4\") " pod="metallb-system/speaker-wpmzn" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.194996 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z7dp\" (UniqueName: \"kubernetes.io/projected/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-kube-api-access-5z7dp\") pod \"speaker-wpmzn\" (UID: \"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4\") " pod="metallb-system/speaker-wpmzn" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.195074 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-memberlist\") pod \"speaker-wpmzn\" (UID: \"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4\") " pod="metallb-system/speaker-wpmzn" Feb 16 13:48:57 crc 
kubenswrapper[4812]: I0216 13:48:57.195104 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5977d87d-ec62-4a14-8df1-d1b37209d48d-metrics-certs\") pod \"controller-69bbfbf88f-k45w2\" (UID: \"5977d87d-ec62-4a14-8df1-d1b37209d48d\") " pod="metallb-system/controller-69bbfbf88f-k45w2" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.195143 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpxvf\" (UniqueName: \"kubernetes.io/projected/5977d87d-ec62-4a14-8df1-d1b37209d48d-kube-api-access-tpxvf\") pod \"controller-69bbfbf88f-k45w2\" (UID: \"5977d87d-ec62-4a14-8df1-d1b37209d48d\") " pod="metallb-system/controller-69bbfbf88f-k45w2" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.195173 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5977d87d-ec62-4a14-8df1-d1b37209d48d-cert\") pod \"controller-69bbfbf88f-k45w2\" (UID: \"5977d87d-ec62-4a14-8df1-d1b37209d48d\") " pod="metallb-system/controller-69bbfbf88f-k45w2" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.195195 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-metrics-certs\") pod \"speaker-wpmzn\" (UID: \"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4\") " pod="metallb-system/speaker-wpmzn" Feb 16 13:48:57 crc kubenswrapper[4812]: E0216 13:48:57.195328 4812 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 16 13:48:57 crc kubenswrapper[4812]: E0216 13:48:57.195385 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-metrics-certs podName:c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4 nodeName:}" failed. 
No retries permitted until 2026-02-16 13:48:57.695366717 +0000 UTC m=+1026.759697418 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-metrics-certs") pod "speaker-wpmzn" (UID: "c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4") : secret "speaker-certs-secret" not found Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.196327 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-metallb-excludel2\") pod \"speaker-wpmzn\" (UID: \"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4\") " pod="metallb-system/speaker-wpmzn" Feb 16 13:48:57 crc kubenswrapper[4812]: E0216 13:48:57.196365 4812 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 13:48:57 crc kubenswrapper[4812]: E0216 13:48:57.196436 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-memberlist podName:c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4 nodeName:}" failed. No retries permitted until 2026-02-16 13:48:57.696415157 +0000 UTC m=+1026.760745938 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-memberlist") pod "speaker-wpmzn" (UID: "c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4") : secret "metallb-memberlist" not found Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.198504 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.199631 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5977d87d-ec62-4a14-8df1-d1b37209d48d-metrics-certs\") pod \"controller-69bbfbf88f-k45w2\" (UID: \"5977d87d-ec62-4a14-8df1-d1b37209d48d\") " pod="metallb-system/controller-69bbfbf88f-k45w2" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.209829 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5977d87d-ec62-4a14-8df1-d1b37209d48d-cert\") pod \"controller-69bbfbf88f-k45w2\" (UID: \"5977d87d-ec62-4a14-8df1-d1b37209d48d\") " pod="metallb-system/controller-69bbfbf88f-k45w2" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.215014 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpxvf\" (UniqueName: \"kubernetes.io/projected/5977d87d-ec62-4a14-8df1-d1b37209d48d-kube-api-access-tpxvf\") pod \"controller-69bbfbf88f-k45w2\" (UID: \"5977d87d-ec62-4a14-8df1-d1b37209d48d\") " pod="metallb-system/controller-69bbfbf88f-k45w2" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.234200 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z7dp\" (UniqueName: \"kubernetes.io/projected/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-kube-api-access-5z7dp\") pod \"speaker-wpmzn\" (UID: \"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4\") " pod="metallb-system/speaker-wpmzn" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.394974 4812 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-k45w2" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.599711 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78add67a-1f63-4b2a-88b5-39f2ef90c06e-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-2h7tc\" (UID: \"78add67a-1f63-4b2a-88b5-39f2ef90c06e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.599830 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7aa158c4-bd4e-46d5-92f5-8635e722a673-metrics-certs\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.605138 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7aa158c4-bd4e-46d5-92f5-8635e722a673-metrics-certs\") pod \"frr-k8s-6zjgs\" (UID: \"7aa158c4-bd4e-46d5-92f5-8635e722a673\") " pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.605345 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78add67a-1f63-4b2a-88b5-39f2ef90c06e-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-2h7tc\" (UID: \"78add67a-1f63-4b2a-88b5-39f2ef90c06e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.701035 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-memberlist\") pod \"speaker-wpmzn\" (UID: \"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4\") " pod="metallb-system/speaker-wpmzn" Feb 16 13:48:57 crc 
kubenswrapper[4812]: I0216 13:48:57.701102 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-metrics-certs\") pod \"speaker-wpmzn\" (UID: \"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4\") " pod="metallb-system/speaker-wpmzn" Feb 16 13:48:57 crc kubenswrapper[4812]: E0216 13:48:57.701229 4812 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 13:48:57 crc kubenswrapper[4812]: E0216 13:48:57.701314 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-memberlist podName:c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4 nodeName:}" failed. No retries permitted until 2026-02-16 13:48:58.701296903 +0000 UTC m=+1027.765627604 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-memberlist") pod "speaker-wpmzn" (UID: "c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4") : secret "metallb-memberlist" not found Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.704024 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-metrics-certs\") pod \"speaker-wpmzn\" (UID: \"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4\") " pod="metallb-system/speaker-wpmzn" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.802903 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-k45w2"] Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.839125 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.851858 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc" Feb 16 13:48:57 crc kubenswrapper[4812]: I0216 13:48:57.909390 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-k45w2" event={"ID":"5977d87d-ec62-4a14-8df1-d1b37209d48d","Type":"ContainerStarted","Data":"3438aadf12ca318c0797e2805c552795bbac395e375b6ce372af21cad8a7c50f"} Feb 16 13:48:58 crc kubenswrapper[4812]: I0216 13:48:58.087963 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc"] Feb 16 13:48:58 crc kubenswrapper[4812]: I0216 13:48:58.716544 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-memberlist\") pod \"speaker-wpmzn\" (UID: \"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4\") " pod="metallb-system/speaker-wpmzn" Feb 16 13:48:58 crc kubenswrapper[4812]: I0216 13:48:58.728494 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4-memberlist\") pod \"speaker-wpmzn\" (UID: \"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4\") " pod="metallb-system/speaker-wpmzn" Feb 16 13:48:58 crc kubenswrapper[4812]: I0216 13:48:58.855543 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-wpmzn" Feb 16 13:48:58 crc kubenswrapper[4812]: W0216 13:48:58.883381 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3b5d645_9c9d_48e5_aeb1_9a3dcd39c0a4.slice/crio-75be00974c55da2ee57218964407ff4399c4ae664004fef78d4dc1cae73dd948 WatchSource:0}: Error finding container 75be00974c55da2ee57218964407ff4399c4ae664004fef78d4dc1cae73dd948: Status 404 returned error can't find the container with id 75be00974c55da2ee57218964407ff4399c4ae664004fef78d4dc1cae73dd948 Feb 16 13:48:58 crc kubenswrapper[4812]: I0216 13:48:58.920524 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wpmzn" event={"ID":"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4","Type":"ContainerStarted","Data":"75be00974c55da2ee57218964407ff4399c4ae664004fef78d4dc1cae73dd948"} Feb 16 13:48:58 crc kubenswrapper[4812]: I0216 13:48:58.922136 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc" event={"ID":"78add67a-1f63-4b2a-88b5-39f2ef90c06e","Type":"ContainerStarted","Data":"a54ef457687cb1fc5ce5d8d4bcaae04ab5657461221c119bc263e966bb06b94a"} Feb 16 13:48:58 crc kubenswrapper[4812]: I0216 13:48:58.924241 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-k45w2" event={"ID":"5977d87d-ec62-4a14-8df1-d1b37209d48d","Type":"ContainerStarted","Data":"05a92768cad1b1a20a9abc4f9562ceefeb466f9dc3fbff852c99a3dba86a74a7"} Feb 16 13:48:58 crc kubenswrapper[4812]: I0216 13:48:58.924266 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-k45w2" event={"ID":"5977d87d-ec62-4a14-8df1-d1b37209d48d","Type":"ContainerStarted","Data":"7fb9baacb1f4fa4471a7fd22c7912339bad70bcba11a07dcfb4096d108254066"} Feb 16 13:48:58 crc kubenswrapper[4812]: I0216 13:48:58.924379 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="metallb-system/controller-69bbfbf88f-k45w2" Feb 16 13:48:58 crc kubenswrapper[4812]: I0216 13:48:58.925420 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zjgs" event={"ID":"7aa158c4-bd4e-46d5-92f5-8635e722a673","Type":"ContainerStarted","Data":"947d906f307ac8c11df763a8c5b3306af933413e5e18c7cf491fb0acd10bf1ab"} Feb 16 13:48:58 crc kubenswrapper[4812]: I0216 13:48:58.947861 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-k45w2" podStartSLOduration=1.9478242300000002 podStartE2EDuration="1.94782423s" podCreationTimestamp="2026-02-16 13:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:48:58.943925366 +0000 UTC m=+1028.008256097" watchObservedRunningTime="2026-02-16 13:48:58.94782423 +0000 UTC m=+1028.012154931" Feb 16 13:48:59 crc kubenswrapper[4812]: I0216 13:48:59.939250 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wpmzn" event={"ID":"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4","Type":"ContainerStarted","Data":"077c1376cfd84b7def5648b97791d1e1e03e3917dd4c5866179c80646e75bdac"} Feb 16 13:48:59 crc kubenswrapper[4812]: I0216 13:48:59.939566 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wpmzn" event={"ID":"c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4","Type":"ContainerStarted","Data":"b5d0371f7b3c56274cfdd329ca0d8212a81f57ce61679807e381976ee2040f1d"} Feb 16 13:49:00 crc kubenswrapper[4812]: I0216 13:49:00.947553 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wpmzn" Feb 16 13:49:01 crc kubenswrapper[4812]: I0216 13:49:01.903597 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-wpmzn" podStartSLOduration=4.90357866 podStartE2EDuration="4.90357866s" podCreationTimestamp="2026-02-16 13:48:57 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:48:59.95779014 +0000 UTC m=+1029.022120841" watchObservedRunningTime="2026-02-16 13:49:01.90357866 +0000 UTC m=+1030.967909361" Feb 16 13:49:07 crc kubenswrapper[4812]: I0216 13:49:07.401545 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-k45w2" Feb 16 13:49:08 crc kubenswrapper[4812]: I0216 13:49:08.059340 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc" event={"ID":"78add67a-1f63-4b2a-88b5-39f2ef90c06e","Type":"ContainerStarted","Data":"5074242e2278d7f4c01bef39697ba308e759f9fe1d0662d644f92a3f8ab0d1f9"} Feb 16 13:49:08 crc kubenswrapper[4812]: I0216 13:49:08.060635 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc" Feb 16 13:49:08 crc kubenswrapper[4812]: I0216 13:49:08.062303 4812 generic.go:334] "Generic (PLEG): container finished" podID="7aa158c4-bd4e-46d5-92f5-8635e722a673" containerID="0db39695dc1bf4dbc9cf3ea67246121ea56c3740c89f479f3e891aeadcb9c454" exitCode=0 Feb 16 13:49:08 crc kubenswrapper[4812]: I0216 13:49:08.062461 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zjgs" event={"ID":"7aa158c4-bd4e-46d5-92f5-8635e722a673","Type":"ContainerDied","Data":"0db39695dc1bf4dbc9cf3ea67246121ea56c3740c89f479f3e891aeadcb9c454"} Feb 16 13:49:08 crc kubenswrapper[4812]: I0216 13:49:08.081184 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc" podStartSLOduration=2.814767847 podStartE2EDuration="12.081163599s" podCreationTimestamp="2026-02-16 13:48:56 +0000 UTC" firstStartedPulling="2026-02-16 13:48:58.095613172 +0000 UTC m=+1027.159943863" lastFinishedPulling="2026-02-16 13:49:07.362008914 
+0000 UTC m=+1036.426339615" observedRunningTime="2026-02-16 13:49:08.074705171 +0000 UTC m=+1037.139035872" watchObservedRunningTime="2026-02-16 13:49:08.081163599 +0000 UTC m=+1037.145494300" Feb 16 13:49:09 crc kubenswrapper[4812]: I0216 13:49:09.080220 4812 generic.go:334] "Generic (PLEG): container finished" podID="7aa158c4-bd4e-46d5-92f5-8635e722a673" containerID="af05145b3e3b621f43932579b445b9e4c692963e9341a12b2561c87291d9c1b6" exitCode=0 Feb 16 13:49:09 crc kubenswrapper[4812]: I0216 13:49:09.080341 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zjgs" event={"ID":"7aa158c4-bd4e-46d5-92f5-8635e722a673","Type":"ContainerDied","Data":"af05145b3e3b621f43932579b445b9e4c692963e9341a12b2561c87291d9c1b6"} Feb 16 13:49:10 crc kubenswrapper[4812]: I0216 13:49:10.088171 4812 generic.go:334] "Generic (PLEG): container finished" podID="7aa158c4-bd4e-46d5-92f5-8635e722a673" containerID="b07b4b032ef960f9c18480190840f273c76aced4c8a96fc4469efad5518381fc" exitCode=0 Feb 16 13:49:10 crc kubenswrapper[4812]: I0216 13:49:10.088268 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zjgs" event={"ID":"7aa158c4-bd4e-46d5-92f5-8635e722a673","Type":"ContainerDied","Data":"b07b4b032ef960f9c18480190840f273c76aced4c8a96fc4469efad5518381fc"} Feb 16 13:49:11 crc kubenswrapper[4812]: I0216 13:49:11.107723 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zjgs" event={"ID":"7aa158c4-bd4e-46d5-92f5-8635e722a673","Type":"ContainerStarted","Data":"76dcb6963d245875261177e996292fa9e050ae92943dbd87d936806e8fcdbb38"} Feb 16 13:49:11 crc kubenswrapper[4812]: I0216 13:49:11.108023 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zjgs" event={"ID":"7aa158c4-bd4e-46d5-92f5-8635e722a673","Type":"ContainerStarted","Data":"e47bbbffba0f2483714db5505020ccdcc85dd9e263ce27bbb58697794006f720"} Feb 16 13:49:11 crc kubenswrapper[4812]: I0216 13:49:11.108035 4812 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zjgs" event={"ID":"7aa158c4-bd4e-46d5-92f5-8635e722a673","Type":"ContainerStarted","Data":"64ab1b478ef9efb5d4c9fff27d8ad93b23fb2af3dd6cf0f3951b753f074ed8d9"} Feb 16 13:49:11 crc kubenswrapper[4812]: I0216 13:49:11.108045 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zjgs" event={"ID":"7aa158c4-bd4e-46d5-92f5-8635e722a673","Type":"ContainerStarted","Data":"bc394eb213a5f64c9714a4062196bd5cc371a1bbd84527be572f3d5b6736ba73"} Feb 16 13:49:11 crc kubenswrapper[4812]: I0216 13:49:11.108054 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zjgs" event={"ID":"7aa158c4-bd4e-46d5-92f5-8635e722a673","Type":"ContainerStarted","Data":"237c9a1342eac394f27e274fefcb5db80f3cd828ce6467ed1e9546f70987725f"} Feb 16 13:49:12 crc kubenswrapper[4812]: I0216 13:49:12.119619 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-6zjgs" event={"ID":"7aa158c4-bd4e-46d5-92f5-8635e722a673","Type":"ContainerStarted","Data":"ade34d4af185557528f7b15a202b5d9da1c9927cca4c16af9052919de09e1224"} Feb 16 13:49:12 crc kubenswrapper[4812]: I0216 13:49:12.120459 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:49:12 crc kubenswrapper[4812]: I0216 13:49:12.148282 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-6zjgs" podStartSLOduration=6.784379191 podStartE2EDuration="16.14823881s" podCreationTimestamp="2026-02-16 13:48:56 +0000 UTC" firstStartedPulling="2026-02-16 13:48:57.988471223 +0000 UTC m=+1027.052801924" lastFinishedPulling="2026-02-16 13:49:07.352330842 +0000 UTC m=+1036.416661543" observedRunningTime="2026-02-16 13:49:12.144198394 +0000 UTC m=+1041.208529095" watchObservedRunningTime="2026-02-16 13:49:12.14823881 +0000 UTC m=+1041.212569511" Feb 16 13:49:12 crc kubenswrapper[4812]: I0216 13:49:12.840197 
4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:49:12 crc kubenswrapper[4812]: I0216 13:49:12.882608 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:49:17 crc kubenswrapper[4812]: I0216 13:49:17.855753 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-2h7tc" Feb 16 13:49:18 crc kubenswrapper[4812]: I0216 13:49:18.859571 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wpmzn" Feb 16 13:49:21 crc kubenswrapper[4812]: I0216 13:49:21.991084 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-94npd"] Feb 16 13:49:21 crc kubenswrapper[4812]: I0216 13:49:21.992123 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-94npd" Feb 16 13:49:21 crc kubenswrapper[4812]: I0216 13:49:21.994576 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-2jjlc" Feb 16 13:49:21 crc kubenswrapper[4812]: I0216 13:49:21.994809 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 16 13:49:21 crc kubenswrapper[4812]: I0216 13:49:21.996054 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 16 13:49:21 crc kubenswrapper[4812]: I0216 13:49:21.999036 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-94npd"] Feb 16 13:49:22 crc kubenswrapper[4812]: I0216 13:49:22.010076 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjhwb\" (UniqueName: 
\"kubernetes.io/projected/96b9a73b-66d4-4a14-b5e6-76759c41ef43-kube-api-access-zjhwb\") pod \"openstack-operator-index-94npd\" (UID: \"96b9a73b-66d4-4a14-b5e6-76759c41ef43\") " pod="openstack-operators/openstack-operator-index-94npd" Feb 16 13:49:22 crc kubenswrapper[4812]: I0216 13:49:22.112107 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjhwb\" (UniqueName: \"kubernetes.io/projected/96b9a73b-66d4-4a14-b5e6-76759c41ef43-kube-api-access-zjhwb\") pod \"openstack-operator-index-94npd\" (UID: \"96b9a73b-66d4-4a14-b5e6-76759c41ef43\") " pod="openstack-operators/openstack-operator-index-94npd" Feb 16 13:49:22 crc kubenswrapper[4812]: I0216 13:49:22.131290 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjhwb\" (UniqueName: \"kubernetes.io/projected/96b9a73b-66d4-4a14-b5e6-76759c41ef43-kube-api-access-zjhwb\") pod \"openstack-operator-index-94npd\" (UID: \"96b9a73b-66d4-4a14-b5e6-76759c41ef43\") " pod="openstack-operators/openstack-operator-index-94npd" Feb 16 13:49:22 crc kubenswrapper[4812]: I0216 13:49:22.307837 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-94npd" Feb 16 13:49:23 crc kubenswrapper[4812]: W0216 13:49:23.051777 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96b9a73b_66d4_4a14_b5e6_76759c41ef43.slice/crio-9fa020f787ed112b09338f78074a115b59eb4060ff8c1f8aaa4986d42569367e WatchSource:0}: Error finding container 9fa020f787ed112b09338f78074a115b59eb4060ff8c1f8aaa4986d42569367e: Status 404 returned error can't find the container with id 9fa020f787ed112b09338f78074a115b59eb4060ff8c1f8aaa4986d42569367e Feb 16 13:49:23 crc kubenswrapper[4812]: I0216 13:49:23.051911 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-94npd"] Feb 16 13:49:23 crc kubenswrapper[4812]: I0216 13:49:23.054762 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:49:23 crc kubenswrapper[4812]: I0216 13:49:23.191413 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-94npd" event={"ID":"96b9a73b-66d4-4a14-b5e6-76759c41ef43","Type":"ContainerStarted","Data":"9fa020f787ed112b09338f78074a115b59eb4060ff8c1f8aaa4986d42569367e"} Feb 16 13:49:25 crc kubenswrapper[4812]: I0216 13:49:25.382343 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-94npd"] Feb 16 13:49:25 crc kubenswrapper[4812]: I0216 13:49:25.977249 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fgk9h"] Feb 16 13:49:25 crc kubenswrapper[4812]: I0216 13:49:25.978062 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fgk9h" Feb 16 13:49:25 crc kubenswrapper[4812]: I0216 13:49:25.994164 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fgk9h"] Feb 16 13:49:26 crc kubenswrapper[4812]: I0216 13:49:26.173997 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn9wx\" (UniqueName: \"kubernetes.io/projected/e649f9b1-93d2-4d2d-abeb-a67d78038fd9-kube-api-access-dn9wx\") pod \"openstack-operator-index-fgk9h\" (UID: \"e649f9b1-93d2-4d2d-abeb-a67d78038fd9\") " pod="openstack-operators/openstack-operator-index-fgk9h" Feb 16 13:49:26 crc kubenswrapper[4812]: I0216 13:49:26.214070 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-94npd" event={"ID":"96b9a73b-66d4-4a14-b5e6-76759c41ef43","Type":"ContainerStarted","Data":"bd7abee72f7e72a689809482d5f5953c1caae4b01e1a1291320622fdccebd923"} Feb 16 13:49:26 crc kubenswrapper[4812]: I0216 13:49:26.214274 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-94npd" podUID="96b9a73b-66d4-4a14-b5e6-76759c41ef43" containerName="registry-server" containerID="cri-o://bd7abee72f7e72a689809482d5f5953c1caae4b01e1a1291320622fdccebd923" gracePeriod=2 Feb 16 13:49:26 crc kubenswrapper[4812]: I0216 13:49:26.237505 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-94npd" podStartSLOduration=2.474774148 podStartE2EDuration="5.237484175s" podCreationTimestamp="2026-02-16 13:49:21 +0000 UTC" firstStartedPulling="2026-02-16 13:49:23.054436784 +0000 UTC m=+1052.118767485" lastFinishedPulling="2026-02-16 13:49:25.817146811 +0000 UTC m=+1054.881477512" observedRunningTime="2026-02-16 13:49:26.230853924 +0000 UTC m=+1055.295184615" watchObservedRunningTime="2026-02-16 
13:49:26.237484175 +0000 UTC m=+1055.301814886" Feb 16 13:49:26 crc kubenswrapper[4812]: I0216 13:49:26.275241 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn9wx\" (UniqueName: \"kubernetes.io/projected/e649f9b1-93d2-4d2d-abeb-a67d78038fd9-kube-api-access-dn9wx\") pod \"openstack-operator-index-fgk9h\" (UID: \"e649f9b1-93d2-4d2d-abeb-a67d78038fd9\") " pod="openstack-operators/openstack-operator-index-fgk9h" Feb 16 13:49:26 crc kubenswrapper[4812]: I0216 13:49:26.301063 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn9wx\" (UniqueName: \"kubernetes.io/projected/e649f9b1-93d2-4d2d-abeb-a67d78038fd9-kube-api-access-dn9wx\") pod \"openstack-operator-index-fgk9h\" (UID: \"e649f9b1-93d2-4d2d-abeb-a67d78038fd9\") " pod="openstack-operators/openstack-operator-index-fgk9h" Feb 16 13:49:26 crc kubenswrapper[4812]: I0216 13:49:26.604789 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fgk9h" Feb 16 13:49:26 crc kubenswrapper[4812]: I0216 13:49:26.954195 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-94npd" Feb 16 13:49:27 crc kubenswrapper[4812]: I0216 13:49:27.085242 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjhwb\" (UniqueName: \"kubernetes.io/projected/96b9a73b-66d4-4a14-b5e6-76759c41ef43-kube-api-access-zjhwb\") pod \"96b9a73b-66d4-4a14-b5e6-76759c41ef43\" (UID: \"96b9a73b-66d4-4a14-b5e6-76759c41ef43\") " Feb 16 13:49:27 crc kubenswrapper[4812]: I0216 13:49:27.090822 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b9a73b-66d4-4a14-b5e6-76759c41ef43-kube-api-access-zjhwb" (OuterVolumeSpecName: "kube-api-access-zjhwb") pod "96b9a73b-66d4-4a14-b5e6-76759c41ef43" (UID: "96b9a73b-66d4-4a14-b5e6-76759c41ef43"). InnerVolumeSpecName "kube-api-access-zjhwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:49:27 crc kubenswrapper[4812]: I0216 13:49:27.186750 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjhwb\" (UniqueName: \"kubernetes.io/projected/96b9a73b-66d4-4a14-b5e6-76759c41ef43-kube-api-access-zjhwb\") on node \"crc\" DevicePath \"\"" Feb 16 13:49:27 crc kubenswrapper[4812]: I0216 13:49:27.222538 4812 generic.go:334] "Generic (PLEG): container finished" podID="96b9a73b-66d4-4a14-b5e6-76759c41ef43" containerID="bd7abee72f7e72a689809482d5f5953c1caae4b01e1a1291320622fdccebd923" exitCode=0 Feb 16 13:49:27 crc kubenswrapper[4812]: I0216 13:49:27.222588 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-94npd" event={"ID":"96b9a73b-66d4-4a14-b5e6-76759c41ef43","Type":"ContainerDied","Data":"bd7abee72f7e72a689809482d5f5953c1caae4b01e1a1291320622fdccebd923"} Feb 16 13:49:27 crc kubenswrapper[4812]: I0216 13:49:27.222626 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-94npd" 
event={"ID":"96b9a73b-66d4-4a14-b5e6-76759c41ef43","Type":"ContainerDied","Data":"9fa020f787ed112b09338f78074a115b59eb4060ff8c1f8aaa4986d42569367e"} Feb 16 13:49:27 crc kubenswrapper[4812]: I0216 13:49:27.222644 4812 scope.go:117] "RemoveContainer" containerID="bd7abee72f7e72a689809482d5f5953c1caae4b01e1a1291320622fdccebd923" Feb 16 13:49:27 crc kubenswrapper[4812]: I0216 13:49:27.222648 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-94npd" Feb 16 13:49:27 crc kubenswrapper[4812]: I0216 13:49:27.248327 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-94npd"] Feb 16 13:49:27 crc kubenswrapper[4812]: I0216 13:49:27.248403 4812 scope.go:117] "RemoveContainer" containerID="bd7abee72f7e72a689809482d5f5953c1caae4b01e1a1291320622fdccebd923" Feb 16 13:49:27 crc kubenswrapper[4812]: E0216 13:49:27.248870 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd7abee72f7e72a689809482d5f5953c1caae4b01e1a1291320622fdccebd923\": container with ID starting with bd7abee72f7e72a689809482d5f5953c1caae4b01e1a1291320622fdccebd923 not found: ID does not exist" containerID="bd7abee72f7e72a689809482d5f5953c1caae4b01e1a1291320622fdccebd923" Feb 16 13:49:27 crc kubenswrapper[4812]: I0216 13:49:27.248913 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd7abee72f7e72a689809482d5f5953c1caae4b01e1a1291320622fdccebd923"} err="failed to get container status \"bd7abee72f7e72a689809482d5f5953c1caae4b01e1a1291320622fdccebd923\": rpc error: code = NotFound desc = could not find container \"bd7abee72f7e72a689809482d5f5953c1caae4b01e1a1291320622fdccebd923\": container with ID starting with bd7abee72f7e72a689809482d5f5953c1caae4b01e1a1291320622fdccebd923 not found: ID does not exist" Feb 16 13:49:27 crc kubenswrapper[4812]: I0216 
13:49:27.254687 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-94npd"] Feb 16 13:49:27 crc kubenswrapper[4812]: I0216 13:49:27.276911 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fgk9h"] Feb 16 13:49:27 crc kubenswrapper[4812]: W0216 13:49:27.283306 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode649f9b1_93d2_4d2d_abeb_a67d78038fd9.slice/crio-4d507736ca3ea29c97babe77c8f8b63a5b2d9b873a2edcc3ef5c9d068105726f WatchSource:0}: Error finding container 4d507736ca3ea29c97babe77c8f8b63a5b2d9b873a2edcc3ef5c9d068105726f: Status 404 returned error can't find the container with id 4d507736ca3ea29c97babe77c8f8b63a5b2d9b873a2edcc3ef5c9d068105726f Feb 16 13:49:27 crc kubenswrapper[4812]: I0216 13:49:27.845919 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-6zjgs" Feb 16 13:49:27 crc kubenswrapper[4812]: I0216 13:49:27.887401 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b9a73b-66d4-4a14-b5e6-76759c41ef43" path="/var/lib/kubelet/pods/96b9a73b-66d4-4a14-b5e6-76759c41ef43/volumes" Feb 16 13:49:28 crc kubenswrapper[4812]: I0216 13:49:28.231146 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fgk9h" event={"ID":"e649f9b1-93d2-4d2d-abeb-a67d78038fd9","Type":"ContainerStarted","Data":"b72e9bc4cb2fb2a9f068c53b1ec2be20e2fc2d7faccf24e6020a48191da82c97"} Feb 16 13:49:28 crc kubenswrapper[4812]: I0216 13:49:28.231187 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fgk9h" event={"ID":"e649f9b1-93d2-4d2d-abeb-a67d78038fd9","Type":"ContainerStarted","Data":"4d507736ca3ea29c97babe77c8f8b63a5b2d9b873a2edcc3ef5c9d068105726f"} Feb 16 13:49:28 crc kubenswrapper[4812]: I0216 13:49:28.250481 4812 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fgk9h" podStartSLOduration=3.205065024 podStartE2EDuration="3.250433712s" podCreationTimestamp="2026-02-16 13:49:25 +0000 UTC" firstStartedPulling="2026-02-16 13:49:27.28688102 +0000 UTC m=+1056.351211721" lastFinishedPulling="2026-02-16 13:49:27.332249708 +0000 UTC m=+1056.396580409" observedRunningTime="2026-02-16 13:49:28.247809556 +0000 UTC m=+1057.312140257" watchObservedRunningTime="2026-02-16 13:49:28.250433712 +0000 UTC m=+1057.314764413" Feb 16 13:49:36 crc kubenswrapper[4812]: I0216 13:49:36.604996 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-fgk9h" Feb 16 13:49:36 crc kubenswrapper[4812]: I0216 13:49:36.606571 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-fgk9h" Feb 16 13:49:36 crc kubenswrapper[4812]: I0216 13:49:36.638515 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-fgk9h" Feb 16 13:49:37 crc kubenswrapper[4812]: I0216 13:49:37.501115 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-fgk9h" Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 13:49:43.347605 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8"] Feb 16 13:49:43 crc kubenswrapper[4812]: E0216 13:49:43.349308 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b9a73b-66d4-4a14-b5e6-76759c41ef43" containerName="registry-server" Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 13:49:43.349411 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b9a73b-66d4-4a14-b5e6-76759c41ef43" containerName="registry-server" Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 
13:49:43.349667 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b9a73b-66d4-4a14-b5e6-76759c41ef43" containerName="registry-server" Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 13:49:43.350891 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 13:49:43.353619 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hcmnc" Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 13:49:43.386000 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8"] Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 13:49:43.479333 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-bundle\") pod \"ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8\" (UID: \"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2\") " pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 13:49:43.479393 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9lsw\" (UniqueName: \"kubernetes.io/projected/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-kube-api-access-l9lsw\") pod \"ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8\" (UID: \"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2\") " pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 13:49:43.479486 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-util\") pod \"ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8\" (UID: \"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2\") " pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 13:49:43.580470 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9lsw\" (UniqueName: \"kubernetes.io/projected/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-kube-api-access-l9lsw\") pod \"ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8\" (UID: \"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2\") " pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 13:49:43.580571 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-util\") pod \"ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8\" (UID: \"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2\") " pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 13:49:43.580636 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-bundle\") pod \"ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8\" (UID: \"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2\") " pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 13:49:43.581266 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-util\") pod \"ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8\" (UID: 
\"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2\") " pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 13:49:43.581356 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-bundle\") pod \"ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8\" (UID: \"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2\") " pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 13:49:43.606723 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9lsw\" (UniqueName: \"kubernetes.io/projected/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-kube-api-access-l9lsw\") pod \"ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8\" (UID: \"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2\") " pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 13:49:43.669277 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" Feb 16 13:49:43 crc kubenswrapper[4812]: W0216 13:49:43.882802 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6aa3c82_ffc4_4a8f_8ab7_5e4b32ee90b2.slice/crio-b30b0a4cdec3ae6833c634902f3217f04949735a83abb8ef5c7fd9e39a2a1936 WatchSource:0}: Error finding container b30b0a4cdec3ae6833c634902f3217f04949735a83abb8ef5c7fd9e39a2a1936: Status 404 returned error can't find the container with id b30b0a4cdec3ae6833c634902f3217f04949735a83abb8ef5c7fd9e39a2a1936 Feb 16 13:49:43 crc kubenswrapper[4812]: I0216 13:49:43.895769 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8"] Feb 16 13:49:44 crc kubenswrapper[4812]: I0216 13:49:44.671169 4812 generic.go:334] "Generic (PLEG): container finished" podID="a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2" containerID="223ef906a6c1227aa3004d053166f1c93af5a8384ff597256b4dd8882ea6479e" exitCode=0 Feb 16 13:49:44 crc kubenswrapper[4812]: I0216 13:49:44.671227 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" event={"ID":"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2","Type":"ContainerDied","Data":"223ef906a6c1227aa3004d053166f1c93af5a8384ff597256b4dd8882ea6479e"} Feb 16 13:49:44 crc kubenswrapper[4812]: I0216 13:49:44.671484 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" event={"ID":"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2","Type":"ContainerStarted","Data":"b30b0a4cdec3ae6833c634902f3217f04949735a83abb8ef5c7fd9e39a2a1936"} Feb 16 13:49:45 crc kubenswrapper[4812]: I0216 13:49:45.678859 4812 generic.go:334] "Generic (PLEG): container finished" 
podID="a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2" containerID="c08d58b37033440b0fc90ba1fcfd5c48dcac0be1871911cbd7f8bf891f2c4c76" exitCode=0 Feb 16 13:49:45 crc kubenswrapper[4812]: I0216 13:49:45.679069 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" event={"ID":"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2","Type":"ContainerDied","Data":"c08d58b37033440b0fc90ba1fcfd5c48dcac0be1871911cbd7f8bf891f2c4c76"} Feb 16 13:49:46 crc kubenswrapper[4812]: I0216 13:49:46.687897 4812 generic.go:334] "Generic (PLEG): container finished" podID="a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2" containerID="1bd9263feb05bde99d1d006e41cbb3b988205776a06434bd0d45fa3523b5359a" exitCode=0 Feb 16 13:49:46 crc kubenswrapper[4812]: I0216 13:49:46.687951 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" event={"ID":"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2","Type":"ContainerDied","Data":"1bd9263feb05bde99d1d006e41cbb3b988205776a06434bd0d45fa3523b5359a"} Feb 16 13:49:47 crc kubenswrapper[4812]: I0216 13:49:47.956427 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" Feb 16 13:49:48 crc kubenswrapper[4812]: I0216 13:49:48.043654 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9lsw\" (UniqueName: \"kubernetes.io/projected/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-kube-api-access-l9lsw\") pod \"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2\" (UID: \"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2\") " Feb 16 13:49:48 crc kubenswrapper[4812]: I0216 13:49:48.043760 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-bundle\") pod \"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2\" (UID: \"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2\") " Feb 16 13:49:48 crc kubenswrapper[4812]: I0216 13:49:48.043825 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-util\") pod \"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2\" (UID: \"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2\") " Feb 16 13:49:48 crc kubenswrapper[4812]: I0216 13:49:48.044655 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-bundle" (OuterVolumeSpecName: "bundle") pod "a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2" (UID: "a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:49:48 crc kubenswrapper[4812]: I0216 13:49:48.057757 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-util" (OuterVolumeSpecName: "util") pod "a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2" (UID: "a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:49:48 crc kubenswrapper[4812]: I0216 13:49:48.057857 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-kube-api-access-l9lsw" (OuterVolumeSpecName: "kube-api-access-l9lsw") pod "a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2" (UID: "a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2"). InnerVolumeSpecName "kube-api-access-l9lsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:49:48 crc kubenswrapper[4812]: I0216 13:49:48.145371 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9lsw\" (UniqueName: \"kubernetes.io/projected/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-kube-api-access-l9lsw\") on node \"crc\" DevicePath \"\"" Feb 16 13:49:48 crc kubenswrapper[4812]: I0216 13:49:48.145426 4812 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:49:48 crc kubenswrapper[4812]: I0216 13:49:48.145437 4812 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2-util\") on node \"crc\" DevicePath \"\"" Feb 16 13:49:48 crc kubenswrapper[4812]: I0216 13:49:48.704297 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" event={"ID":"a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2","Type":"ContainerDied","Data":"b30b0a4cdec3ae6833c634902f3217f04949735a83abb8ef5c7fd9e39a2a1936"} Feb 16 13:49:48 crc kubenswrapper[4812]: I0216 13:49:48.704343 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b30b0a4cdec3ae6833c634902f3217f04949735a83abb8ef5c7fd9e39a2a1936" Feb 16 13:49:48 crc kubenswrapper[4812]: I0216 13:49:48.704405 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8" Feb 16 13:49:55 crc kubenswrapper[4812]: I0216 13:49:55.517416 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-79487dd5dc-7hqsn"] Feb 16 13:49:55 crc kubenswrapper[4812]: E0216 13:49:55.518250 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2" containerName="extract" Feb 16 13:49:55 crc kubenswrapper[4812]: I0216 13:49:55.518268 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2" containerName="extract" Feb 16 13:49:55 crc kubenswrapper[4812]: E0216 13:49:55.518278 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2" containerName="util" Feb 16 13:49:55 crc kubenswrapper[4812]: I0216 13:49:55.518286 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2" containerName="util" Feb 16 13:49:55 crc kubenswrapper[4812]: E0216 13:49:55.518315 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2" containerName="pull" Feb 16 13:49:55 crc kubenswrapper[4812]: I0216 13:49:55.518322 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2" containerName="pull" Feb 16 13:49:55 crc kubenswrapper[4812]: I0216 13:49:55.518477 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2" containerName="extract" Feb 16 13:49:55 crc kubenswrapper[4812]: I0216 13:49:55.519002 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-79487dd5dc-7hqsn" Feb 16 13:49:55 crc kubenswrapper[4812]: I0216 13:49:55.521480 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-9z4ws" Feb 16 13:49:55 crc kubenswrapper[4812]: I0216 13:49:55.533219 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgcgq\" (UniqueName: \"kubernetes.io/projected/d8b435a8-6cec-4517-bf21-3241511a1cbc-kube-api-access-wgcgq\") pod \"openstack-operator-controller-init-79487dd5dc-7hqsn\" (UID: \"d8b435a8-6cec-4517-bf21-3241511a1cbc\") " pod="openstack-operators/openstack-operator-controller-init-79487dd5dc-7hqsn" Feb 16 13:49:55 crc kubenswrapper[4812]: I0216 13:49:55.548623 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-79487dd5dc-7hqsn"] Feb 16 13:49:55 crc kubenswrapper[4812]: I0216 13:49:55.634664 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgcgq\" (UniqueName: \"kubernetes.io/projected/d8b435a8-6cec-4517-bf21-3241511a1cbc-kube-api-access-wgcgq\") pod \"openstack-operator-controller-init-79487dd5dc-7hqsn\" (UID: \"d8b435a8-6cec-4517-bf21-3241511a1cbc\") " pod="openstack-operators/openstack-operator-controller-init-79487dd5dc-7hqsn" Feb 16 13:49:55 crc kubenswrapper[4812]: I0216 13:49:55.664460 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgcgq\" (UniqueName: \"kubernetes.io/projected/d8b435a8-6cec-4517-bf21-3241511a1cbc-kube-api-access-wgcgq\") pod \"openstack-operator-controller-init-79487dd5dc-7hqsn\" (UID: \"d8b435a8-6cec-4517-bf21-3241511a1cbc\") " pod="openstack-operators/openstack-operator-controller-init-79487dd5dc-7hqsn" Feb 16 13:49:55 crc kubenswrapper[4812]: I0216 13:49:55.838196 4812 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-79487dd5dc-7hqsn" Feb 16 13:49:56 crc kubenswrapper[4812]: I0216 13:49:56.062036 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-79487dd5dc-7hqsn"] Feb 16 13:49:56 crc kubenswrapper[4812]: W0216 13:49:56.068608 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8b435a8_6cec_4517_bf21_3241511a1cbc.slice/crio-6e7ca2ebc54a87cb40993c41a2be1b3a61412bd610da8def8e94f872e3186a2e WatchSource:0}: Error finding container 6e7ca2ebc54a87cb40993c41a2be1b3a61412bd610da8def8e94f872e3186a2e: Status 404 returned error can't find the container with id 6e7ca2ebc54a87cb40993c41a2be1b3a61412bd610da8def8e94f872e3186a2e Feb 16 13:49:56 crc kubenswrapper[4812]: I0216 13:49:56.759163 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-79487dd5dc-7hqsn" event={"ID":"d8b435a8-6cec-4517-bf21-3241511a1cbc","Type":"ContainerStarted","Data":"6e7ca2ebc54a87cb40993c41a2be1b3a61412bd610da8def8e94f872e3186a2e"} Feb 16 13:50:03 crc kubenswrapper[4812]: I0216 13:50:03.950313 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-79487dd5dc-7hqsn" event={"ID":"d8b435a8-6cec-4517-bf21-3241511a1cbc","Type":"ContainerStarted","Data":"eee08a160982792b56893ae41cbd11da1b58c0603f17e2deb443d779f4e31cca"} Feb 16 13:50:03 crc kubenswrapper[4812]: I0216 13:50:03.950929 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-79487dd5dc-7hqsn" Feb 16 13:50:03 crc kubenswrapper[4812]: I0216 13:50:03.977689 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-79487dd5dc-7hqsn" podStartSLOduration=2.038386773 
podStartE2EDuration="8.977672834s" podCreationTimestamp="2026-02-16 13:49:55 +0000 UTC" firstStartedPulling="2026-02-16 13:49:56.070914968 +0000 UTC m=+1085.135245669" lastFinishedPulling="2026-02-16 13:50:03.010201029 +0000 UTC m=+1092.074531730" observedRunningTime="2026-02-16 13:50:03.976688175 +0000 UTC m=+1093.041018896" watchObservedRunningTime="2026-02-16 13:50:03.977672834 +0000 UTC m=+1093.042003535" Feb 16 13:50:14 crc kubenswrapper[4812]: I0216 13:50:14.549132 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:50:14 crc kubenswrapper[4812]: I0216 13:50:14.549819 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:50:15 crc kubenswrapper[4812]: I0216 13:50:15.840275 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-79487dd5dc-7hqsn" Feb 16 13:50:44 crc kubenswrapper[4812]: I0216 13:50:44.549557 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:50:44 crc kubenswrapper[4812]: I0216 13:50:44.550219 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.656878 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-rtgvj"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.662456 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rtgvj" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.662655 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-6vps7"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.663483 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6vps7" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.665877 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-vvkhr" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.665977 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-5bdbf" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.667964 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-rtgvj"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.675473 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-6vps7"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.701248 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-8zb8t"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 
13:50:48.702287 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8zb8t" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.704978 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tjmjx" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.704975 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl49s\" (UniqueName: \"kubernetes.io/projected/38ed5722-af29-41e2-a323-dfe0c39d537d-kube-api-access-tl49s\") pod \"designate-operator-controller-manager-6d8bf5c495-8zb8t\" (UID: \"38ed5722-af29-41e2-a323-dfe0c39d537d\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8zb8t" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.705060 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zvzj\" (UniqueName: \"kubernetes.io/projected/62b330d8-6f6a-4daf-ba84-fada3debae44-kube-api-access-8zvzj\") pod \"barbican-operator-controller-manager-868647ff47-rtgvj\" (UID: \"62b330d8-6f6a-4daf-ba84-fada3debae44\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rtgvj" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.705134 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24jl5\" (UniqueName: \"kubernetes.io/projected/484efbc6-46c2-44e3-8edb-8273b347f394-kube-api-access-24jl5\") pod \"cinder-operator-controller-manager-5d946d989d-6vps7\" (UID: \"484efbc6-46c2-44e3-8edb-8273b347f394\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6vps7" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.722492 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-8zb8t"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.727331 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-ckx77"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.728378 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-ckx77" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.732391 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-wzb8q" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.733700 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-ckx77"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.745506 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xsnnh"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.746800 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsnnh" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.753485 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-gh4gq" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.772067 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5lnxb"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.773117 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5lnxb" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.777200 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-42z87" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.787678 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xsnnh"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.795570 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.796345 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.802995 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.803002 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qlgw9" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.810257 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zvzj\" (UniqueName: \"kubernetes.io/projected/62b330d8-6f6a-4daf-ba84-fada3debae44-kube-api-access-8zvzj\") pod \"barbican-operator-controller-manager-868647ff47-rtgvj\" (UID: \"62b330d8-6f6a-4daf-ba84-fada3debae44\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rtgvj" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.810348 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24jl5\" (UniqueName: 
\"kubernetes.io/projected/484efbc6-46c2-44e3-8edb-8273b347f394-kube-api-access-24jl5\") pod \"cinder-operator-controller-manager-5d946d989d-6vps7\" (UID: \"484efbc6-46c2-44e3-8edb-8273b347f394\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6vps7" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.810404 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl49s\" (UniqueName: \"kubernetes.io/projected/38ed5722-af29-41e2-a323-dfe0c39d537d-kube-api-access-tl49s\") pod \"designate-operator-controller-manager-6d8bf5c495-8zb8t\" (UID: \"38ed5722-af29-41e2-a323-dfe0c39d537d\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8zb8t" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.814964 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.819164 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5lnxb"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.846744 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-ts4nk"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.847821 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ts4nk" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.852858 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-9v26c" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.872687 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zvzj\" (UniqueName: \"kubernetes.io/projected/62b330d8-6f6a-4daf-ba84-fada3debae44-kube-api-access-8zvzj\") pod \"barbican-operator-controller-manager-868647ff47-rtgvj\" (UID: \"62b330d8-6f6a-4daf-ba84-fada3debae44\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rtgvj" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.873464 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl49s\" (UniqueName: \"kubernetes.io/projected/38ed5722-af29-41e2-a323-dfe0c39d537d-kube-api-access-tl49s\") pod \"designate-operator-controller-manager-6d8bf5c495-8zb8t\" (UID: \"38ed5722-af29-41e2-a323-dfe0c39d537d\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8zb8t" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.873520 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24jl5\" (UniqueName: \"kubernetes.io/projected/484efbc6-46c2-44e3-8edb-8273b347f394-kube-api-access-24jl5\") pod \"cinder-operator-controller-manager-5d946d989d-6vps7\" (UID: \"484efbc6-46c2-44e3-8edb-8273b347f394\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6vps7" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.881554 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-ts4nk"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.899705 4812 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-sh8cv"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.900809 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-sh8cv" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.902725 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-88cv5" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.906285 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-9j44f"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.907088 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9j44f" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.911237 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-6v57k" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.912418 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wwqv\" (UniqueName: \"kubernetes.io/projected/aefc705b-fdf3-4a72-9a38-a78907603aca-kube-api-access-8wwqv\") pod \"ironic-operator-controller-manager-554564d7fc-ts4nk\" (UID: \"aefc705b-fdf3-4a72-9a38-a78907603aca\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ts4nk" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.912464 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqkxx\" (UniqueName: \"kubernetes.io/projected/e9326a1e-ab44-4168-96a4-d140c2f95a88-kube-api-access-qqkxx\") pod \"infra-operator-controller-manager-79d975b745-z7qxl\" (UID: \"e9326a1e-ab44-4168-96a4-d140c2f95a88\") " 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.912608 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfzm2\" (UniqueName: \"kubernetes.io/projected/2e2f91a6-d4f8-422e-bfc1-a78ab10f1338-kube-api-access-lfzm2\") pod \"glance-operator-controller-manager-77987464f4-ckx77\" (UID: \"2e2f91a6-d4f8-422e-bfc1-a78ab10f1338\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-ckx77" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.912628 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4bxb\" (UniqueName: \"kubernetes.io/projected/d14b07fa-996e-407e-b4ff-9cb90a7c8ca1-kube-api-access-c4bxb\") pod \"manila-operator-controller-manager-54f6768c69-9j44f\" (UID: \"d14b07fa-996e-407e-b4ff-9cb90a7c8ca1\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9j44f" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.912643 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert\") pod \"infra-operator-controller-manager-79d975b745-z7qxl\" (UID: \"e9326a1e-ab44-4168-96a4-d140c2f95a88\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.912676 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwjww\" (UniqueName: \"kubernetes.io/projected/eb78077d-7a72-4293-a0bf-8f7ce62aad8d-kube-api-access-xwjww\") pod \"keystone-operator-controller-manager-b4d948c87-sh8cv\" (UID: \"eb78077d-7a72-4293-a0bf-8f7ce62aad8d\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-sh8cv" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 
13:50:48.912707 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k7dd\" (UniqueName: \"kubernetes.io/projected/961308e3-cfdc-43ac-8cf0-63cdc9e8900d-kube-api-access-6k7dd\") pod \"heat-operator-controller-manager-69f49c598c-xsnnh\" (UID: \"961308e3-cfdc-43ac-8cf0-63cdc9e8900d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsnnh" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.912749 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7xsc\" (UniqueName: \"kubernetes.io/projected/2e961da9-05f1-4eaf-ba3a-5d5bc14b7704-kube-api-access-r7xsc\") pod \"horizon-operator-controller-manager-5b9b8895d5-5lnxb\" (UID: \"2e961da9-05f1-4eaf-ba3a-5d5bc14b7704\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5lnxb" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.915231 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-9j44f"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.921515 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-sh8cv"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.943894 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-ggbhg"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.944793 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ggbhg" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.948796 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-5snjw" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.985193 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6zzj"] Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.986139 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6zzj" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.990628 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-q4hpp" Feb 16 13:50:48 crc kubenswrapper[4812]: I0216 13:50:48.991055 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rtgvj" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.004750 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6vps7" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.017648 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wwqv\" (UniqueName: \"kubernetes.io/projected/aefc705b-fdf3-4a72-9a38-a78907603aca-kube-api-access-8wwqv\") pod \"ironic-operator-controller-manager-554564d7fc-ts4nk\" (UID: \"aefc705b-fdf3-4a72-9a38-a78907603aca\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ts4nk" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.017706 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqkxx\" (UniqueName: \"kubernetes.io/projected/e9326a1e-ab44-4168-96a4-d140c2f95a88-kube-api-access-qqkxx\") pod \"infra-operator-controller-manager-79d975b745-z7qxl\" (UID: \"e9326a1e-ab44-4168-96a4-d140c2f95a88\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.017750 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfzm2\" (UniqueName: \"kubernetes.io/projected/2e2f91a6-d4f8-422e-bfc1-a78ab10f1338-kube-api-access-lfzm2\") pod \"glance-operator-controller-manager-77987464f4-ckx77\" (UID: \"2e2f91a6-d4f8-422e-bfc1-a78ab10f1338\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-ckx77" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.017780 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4bxb\" (UniqueName: \"kubernetes.io/projected/d14b07fa-996e-407e-b4ff-9cb90a7c8ca1-kube-api-access-c4bxb\") pod \"manila-operator-controller-manager-54f6768c69-9j44f\" (UID: \"d14b07fa-996e-407e-b4ff-9cb90a7c8ca1\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9j44f" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.017808 
4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert\") pod \"infra-operator-controller-manager-79d975b745-z7qxl\" (UID: \"e9326a1e-ab44-4168-96a4-d140c2f95a88\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.017857 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwjww\" (UniqueName: \"kubernetes.io/projected/eb78077d-7a72-4293-a0bf-8f7ce62aad8d-kube-api-access-xwjww\") pod \"keystone-operator-controller-manager-b4d948c87-sh8cv\" (UID: \"eb78077d-7a72-4293-a0bf-8f7ce62aad8d\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-sh8cv" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.017888 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k7dd\" (UniqueName: \"kubernetes.io/projected/961308e3-cfdc-43ac-8cf0-63cdc9e8900d-kube-api-access-6k7dd\") pod \"heat-operator-controller-manager-69f49c598c-xsnnh\" (UID: \"961308e3-cfdc-43ac-8cf0-63cdc9e8900d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsnnh" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.017960 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7xsc\" (UniqueName: \"kubernetes.io/projected/2e961da9-05f1-4eaf-ba3a-5d5bc14b7704-kube-api-access-r7xsc\") pod \"horizon-operator-controller-manager-5b9b8895d5-5lnxb\" (UID: \"2e961da9-05f1-4eaf-ba3a-5d5bc14b7704\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5lnxb" Feb 16 13:50:49 crc kubenswrapper[4812]: E0216 13:50:49.019074 4812 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:50:49 crc kubenswrapper[4812]: E0216 
13:50:49.019135 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert podName:e9326a1e-ab44-4168-96a4-d140c2f95a88 nodeName:}" failed. No retries permitted until 2026-02-16 13:50:49.519109696 +0000 UTC m=+1138.583440397 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert") pod "infra-operator-controller-manager-79d975b745-z7qxl" (UID: "e9326a1e-ab44-4168-96a4-d140c2f95a88") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.026742 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8zb8t" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.042001 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqkxx\" (UniqueName: \"kubernetes.io/projected/e9326a1e-ab44-4168-96a4-d140c2f95a88-kube-api-access-qqkxx\") pod \"infra-operator-controller-manager-79d975b745-z7qxl\" (UID: \"e9326a1e-ab44-4168-96a4-d140c2f95a88\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.042687 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7xsc\" (UniqueName: \"kubernetes.io/projected/2e961da9-05f1-4eaf-ba3a-5d5bc14b7704-kube-api-access-r7xsc\") pod \"horizon-operator-controller-manager-5b9b8895d5-5lnxb\" (UID: \"2e961da9-05f1-4eaf-ba3a-5d5bc14b7704\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5lnxb" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.043211 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwjww\" (UniqueName: \"kubernetes.io/projected/eb78077d-7a72-4293-a0bf-8f7ce62aad8d-kube-api-access-xwjww\") pod 
\"keystone-operator-controller-manager-b4d948c87-sh8cv\" (UID: \"eb78077d-7a72-4293-a0bf-8f7ce62aad8d\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-sh8cv" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.048053 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4bxb\" (UniqueName: \"kubernetes.io/projected/d14b07fa-996e-407e-b4ff-9cb90a7c8ca1-kube-api-access-c4bxb\") pod \"manila-operator-controller-manager-54f6768c69-9j44f\" (UID: \"d14b07fa-996e-407e-b4ff-9cb90a7c8ca1\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9j44f" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.048803 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wwqv\" (UniqueName: \"kubernetes.io/projected/aefc705b-fdf3-4a72-9a38-a78907603aca-kube-api-access-8wwqv\") pod \"ironic-operator-controller-manager-554564d7fc-ts4nk\" (UID: \"aefc705b-fdf3-4a72-9a38-a78907603aca\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ts4nk" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.049250 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfzm2\" (UniqueName: \"kubernetes.io/projected/2e2f91a6-d4f8-422e-bfc1-a78ab10f1338-kube-api-access-lfzm2\") pod \"glance-operator-controller-manager-77987464f4-ckx77\" (UID: \"2e2f91a6-d4f8-422e-bfc1-a78ab10f1338\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-ckx77" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.052235 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-ckx77" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.054052 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-84k8d"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.062945 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k7dd\" (UniqueName: \"kubernetes.io/projected/961308e3-cfdc-43ac-8cf0-63cdc9e8900d-kube-api-access-6k7dd\") pod \"heat-operator-controller-manager-69f49c598c-xsnnh\" (UID: \"961308e3-cfdc-43ac-8cf0-63cdc9e8900d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsnnh" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.075422 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-84k8d" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.079048 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsnnh" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.079428 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-b8qfn" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.095146 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5lnxb" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.096190 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-7sb4m"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.097414 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7sb4m" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.100220 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-dtxh8" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.116814 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6zzj"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.119086 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fqc8\" (UniqueName: \"kubernetes.io/projected/eec306d2-c02f-4a72-bc69-95ee26d33688-kube-api-access-2fqc8\") pod \"neutron-operator-controller-manager-64ddbf8bb-c6zzj\" (UID: \"eec306d2-c02f-4a72-bc69-95ee26d33688\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6zzj" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.119202 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh422\" (UniqueName: \"kubernetes.io/projected/7ef01067-cb64-47cd-a065-9d9677b9646c-kube-api-access-kh422\") pod \"mariadb-operator-controller-manager-6994f66f48-ggbhg\" (UID: \"7ef01067-cb64-47cd-a065-9d9677b9646c\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ggbhg" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.128369 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-ggbhg"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.139011 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-84k8d"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.165493 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-7sb4m"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.189245 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.190207 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.194392 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fkv8d" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.194531 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.208183 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-2vz54"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.210499 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-2vz54" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.218608 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4gbnq" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.220580 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fqc8\" (UniqueName: \"kubernetes.io/projected/eec306d2-c02f-4a72-bc69-95ee26d33688-kube-api-access-2fqc8\") pod \"neutron-operator-controller-manager-64ddbf8bb-c6zzj\" (UID: \"eec306d2-c02f-4a72-bc69-95ee26d33688\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6zzj" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.220625 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52lgl\" (UniqueName: \"kubernetes.io/projected/0c6d2754-f4e2-497a-aa47-aa568aa9805c-kube-api-access-52lgl\") pod \"nova-operator-controller-manager-567668f5cf-7sb4m\" (UID: \"0c6d2754-f4e2-497a-aa47-aa568aa9805c\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7sb4m" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.220667 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lhfj\" (UniqueName: \"kubernetes.io/projected/f05e0adf-a8ed-41cf-9808-b10b0c36e48d-kube-api-access-9lhfj\") pod \"octavia-operator-controller-manager-69f8888797-84k8d\" (UID: \"f05e0adf-a8ed-41cf-9808-b10b0c36e48d\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-84k8d" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.220710 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh422\" (UniqueName: \"kubernetes.io/projected/7ef01067-cb64-47cd-a065-9d9677b9646c-kube-api-access-kh422\") 
pod \"mariadb-operator-controller-manager-6994f66f48-ggbhg\" (UID: \"7ef01067-cb64-47cd-a065-9d9677b9646c\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ggbhg" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.242589 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ts4nk" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.244859 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fqc8\" (UniqueName: \"kubernetes.io/projected/eec306d2-c02f-4a72-bc69-95ee26d33688-kube-api-access-2fqc8\") pod \"neutron-operator-controller-manager-64ddbf8bb-c6zzj\" (UID: \"eec306d2-c02f-4a72-bc69-95ee26d33688\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6zzj" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.256201 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-sh8cv" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.256658 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-c5l89"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.258390 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-c5l89" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.264193 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6djtz" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.275141 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-c5l89"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.280723 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9j44f" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.284189 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh422\" (UniqueName: \"kubernetes.io/projected/7ef01067-cb64-47cd-a065-9d9677b9646c-kube-api-access-kh422\") pod \"mariadb-operator-controller-manager-6994f66f48-ggbhg\" (UID: \"7ef01067-cb64-47cd-a065-9d9677b9646c\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ggbhg" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.284264 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-2vz54"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.287011 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ggbhg" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.297616 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.315046 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6zzj" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.316037 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-lwc2x"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.317631 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-lwc2x" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.324437 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp992\" (UniqueName: \"kubernetes.io/projected/06224f00-35c9-4aae-9dbc-c803abd7de2c-kube-api-access-rp992\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh\" (UID: \"06224f00-35c9-4aae-9dbc-c803abd7de2c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.324516 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgf4w\" (UniqueName: \"kubernetes.io/projected/ce4b529c-2a7c-4919-8a99-78aa7eae9828-kube-api-access-qgf4w\") pod \"placement-operator-controller-manager-8497b45c89-2vz54\" (UID: \"ce4b529c-2a7c-4919-8a99-78aa7eae9828\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-2vz54" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.324585 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh\" (UID: \"06224f00-35c9-4aae-9dbc-c803abd7de2c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.324613 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52lgl\" (UniqueName: \"kubernetes.io/projected/0c6d2754-f4e2-497a-aa47-aa568aa9805c-kube-api-access-52lgl\") pod \"nova-operator-controller-manager-567668f5cf-7sb4m\" (UID: \"0c6d2754-f4e2-497a-aa47-aa568aa9805c\") " 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7sb4m" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.324655 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lhfj\" (UniqueName: \"kubernetes.io/projected/f05e0adf-a8ed-41cf-9808-b10b0c36e48d-kube-api-access-9lhfj\") pod \"octavia-operator-controller-manager-69f8888797-84k8d\" (UID: \"f05e0adf-a8ed-41cf-9808-b10b0c36e48d\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-84k8d" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.327017 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-fsrhl" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.336640 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-lwc2x"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.378526 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-866896b95f-8plmx"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.379501 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-866896b95f-8plmx" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.394825 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-qm4nl" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.398656 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-866896b95f-8plmx"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.411141 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52lgl\" (UniqueName: \"kubernetes.io/projected/0c6d2754-f4e2-497a-aa47-aa568aa9805c-kube-api-access-52lgl\") pod \"nova-operator-controller-manager-567668f5cf-7sb4m\" (UID: \"0c6d2754-f4e2-497a-aa47-aa568aa9805c\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7sb4m" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.427201 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7sb4m" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.428731 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27l2c\" (UniqueName: \"kubernetes.io/projected/21b09391-dc85-4bf1-9210-882f3ee0af01-kube-api-access-27l2c\") pod \"swift-operator-controller-manager-68f46476f-lwc2x\" (UID: \"21b09391-dc85-4bf1-9210-882f3ee0af01\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-lwc2x" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.428789 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8clr7\" (UniqueName: \"kubernetes.io/projected/e75f5735-aff0-453a-8be9-4f55966c7232-kube-api-access-8clr7\") pod \"telemetry-operator-controller-manager-866896b95f-8plmx\" (UID: \"e75f5735-aff0-453a-8be9-4f55966c7232\") " pod="openstack-operators/telemetry-operator-controller-manager-866896b95f-8plmx" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.428822 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh\" (UID: \"06224f00-35c9-4aae-9dbc-c803abd7de2c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.428894 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp992\" (UniqueName: \"kubernetes.io/projected/06224f00-35c9-4aae-9dbc-c803abd7de2c-kube-api-access-rp992\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh\" (UID: \"06224f00-35c9-4aae-9dbc-c803abd7de2c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" Feb 16 
13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.428947 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgf4w\" (UniqueName: \"kubernetes.io/projected/ce4b529c-2a7c-4919-8a99-78aa7eae9828-kube-api-access-qgf4w\") pod \"placement-operator-controller-manager-8497b45c89-2vz54\" (UID: \"ce4b529c-2a7c-4919-8a99-78aa7eae9828\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-2vz54" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.428986 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75j4r\" (UniqueName: \"kubernetes.io/projected/fcba7077-c2f4-4d80-ac24-955ddf007acc-kube-api-access-75j4r\") pod \"ovn-operator-controller-manager-d44cf6b75-c5l89\" (UID: \"fcba7077-c2f4-4d80-ac24-955ddf007acc\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-c5l89" Feb 16 13:50:49 crc kubenswrapper[4812]: E0216 13:50:49.429136 4812 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:50:49 crc kubenswrapper[4812]: E0216 13:50:49.429184 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert podName:06224f00-35c9-4aae-9dbc-c803abd7de2c nodeName:}" failed. No retries permitted until 2026-02-16 13:50:49.929165734 +0000 UTC m=+1138.993496435 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" (UID: "06224f00-35c9-4aae-9dbc-c803abd7de2c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.446019 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lhfj\" (UniqueName: \"kubernetes.io/projected/f05e0adf-a8ed-41cf-9808-b10b0c36e48d-kube-api-access-9lhfj\") pod \"octavia-operator-controller-manager-69f8888797-84k8d\" (UID: \"f05e0adf-a8ed-41cf-9808-b10b0c36e48d\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-84k8d" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.446756 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-mpb9g"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.447797 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-mpb9g" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.457516 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-mpb9g"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.457986 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-znbms" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.459964 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgf4w\" (UniqueName: \"kubernetes.io/projected/ce4b529c-2a7c-4919-8a99-78aa7eae9828-kube-api-access-qgf4w\") pod \"placement-operator-controller-manager-8497b45c89-2vz54\" (UID: \"ce4b529c-2a7c-4919-8a99-78aa7eae9828\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-2vz54" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.495311 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp992\" (UniqueName: \"kubernetes.io/projected/06224f00-35c9-4aae-9dbc-c803abd7de2c-kube-api-access-rp992\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh\" (UID: \"06224f00-35c9-4aae-9dbc-c803abd7de2c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.533152 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75j4r\" (UniqueName: \"kubernetes.io/projected/fcba7077-c2f4-4d80-ac24-955ddf007acc-kube-api-access-75j4r\") pod \"ovn-operator-controller-manager-d44cf6b75-c5l89\" (UID: \"fcba7077-c2f4-4d80-ac24-955ddf007acc\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-c5l89" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.533228 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-27l2c\" (UniqueName: \"kubernetes.io/projected/21b09391-dc85-4bf1-9210-882f3ee0af01-kube-api-access-27l2c\") pod \"swift-operator-controller-manager-68f46476f-lwc2x\" (UID: \"21b09391-dc85-4bf1-9210-882f3ee0af01\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-lwc2x" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.533259 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8clr7\" (UniqueName: \"kubernetes.io/projected/e75f5735-aff0-453a-8be9-4f55966c7232-kube-api-access-8clr7\") pod \"telemetry-operator-controller-manager-866896b95f-8plmx\" (UID: \"e75f5735-aff0-453a-8be9-4f55966c7232\") " pod="openstack-operators/telemetry-operator-controller-manager-866896b95f-8plmx" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.533300 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert\") pod \"infra-operator-controller-manager-79d975b745-z7qxl\" (UID: \"e9326a1e-ab44-4168-96a4-d140c2f95a88\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" Feb 16 13:50:49 crc kubenswrapper[4812]: E0216 13:50:49.533436 4812 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:50:49 crc kubenswrapper[4812]: E0216 13:50:49.533498 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert podName:e9326a1e-ab44-4168-96a4-d140c2f95a88 nodeName:}" failed. No retries permitted until 2026-02-16 13:50:50.533480711 +0000 UTC m=+1139.597811412 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert") pod "infra-operator-controller-manager-79d975b745-z7qxl" (UID: "e9326a1e-ab44-4168-96a4-d140c2f95a88") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.548788 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-tzzsc"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.549778 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tzzsc" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.551842 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-j5697" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.570197 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-tzzsc"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.574224 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75j4r\" (UniqueName: \"kubernetes.io/projected/fcba7077-c2f4-4d80-ac24-955ddf007acc-kube-api-access-75j4r\") pod \"ovn-operator-controller-manager-d44cf6b75-c5l89\" (UID: \"fcba7077-c2f4-4d80-ac24-955ddf007acc\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-c5l89" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.575591 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27l2c\" (UniqueName: \"kubernetes.io/projected/21b09391-dc85-4bf1-9210-882f3ee0af01-kube-api-access-27l2c\") pod \"swift-operator-controller-manager-68f46476f-lwc2x\" (UID: \"21b09391-dc85-4bf1-9210-882f3ee0af01\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-lwc2x" Feb 16 
13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.588757 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-2vz54" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.596994 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8clr7\" (UniqueName: \"kubernetes.io/projected/e75f5735-aff0-453a-8be9-4f55966c7232-kube-api-access-8clr7\") pod \"telemetry-operator-controller-manager-866896b95f-8plmx\" (UID: \"e75f5735-aff0-453a-8be9-4f55966c7232\") " pod="openstack-operators/telemetry-operator-controller-manager-866896b95f-8plmx" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.611815 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.612869 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm"] Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.612977 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:50:49 crc kubenswrapper[4812]: I0216 13:50:49.613997 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-c5l89" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:49.892931 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:49.894704 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nr8zz"] Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.124465 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-lwc2x" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.126097 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-dbpkw" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.126140 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-866896b95f-8plmx" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.128315 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-84k8d" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.131799 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8vnq\" (UniqueName: \"kubernetes.io/projected/ac3c8476-8d98-47f3-b962-23b404164ac2-kube-api-access-h8vnq\") pod \"test-operator-controller-manager-7866795846-mpb9g\" (UID: \"ac3c8476-8d98-47f3-b962-23b404164ac2\") " pod="openstack-operators/test-operator-controller-manager-7866795846-mpb9g" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.132196 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nr8zz" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.126261 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.133723 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-fknn2" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.143993 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh\" (UID: \"06224f00-35c9-4aae-9dbc-c803abd7de2c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" Feb 16 13:50:50 crc kubenswrapper[4812]: E0216 13:50:50.144319 4812 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:50:50 crc kubenswrapper[4812]: E0216 13:50:50.144376 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert podName:06224f00-35c9-4aae-9dbc-c803abd7de2c nodeName:}" failed. No retries permitted until 2026-02-16 13:50:51.144357567 +0000 UTC m=+1140.208688268 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" (UID: "06224f00-35c9-4aae-9dbc-c803abd7de2c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.245288 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8vnq\" (UniqueName: \"kubernetes.io/projected/ac3c8476-8d98-47f3-b962-23b404164ac2-kube-api-access-h8vnq\") pod \"test-operator-controller-manager-7866795846-mpb9g\" (UID: \"ac3c8476-8d98-47f3-b962-23b404164ac2\") " pod="openstack-operators/test-operator-controller-manager-7866795846-mpb9g" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.245563 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f55w\" (UniqueName: \"kubernetes.io/projected/e90db606-561d-4cbc-b3ca-7078e17685ad-kube-api-access-9f55w\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nr8zz\" (UID: \"e90db606-561d-4cbc-b3ca-7078e17685ad\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nr8zz" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.245584 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zw7m\" (UniqueName: \"kubernetes.io/projected/7bd980be-8cfe-448f-a2e0-7dae86e075c9-kube-api-access-5zw7m\") pod \"watcher-operator-controller-manager-5db88f68c-tzzsc\" (UID: \"7bd980be-8cfe-448f-a2e0-7dae86e075c9\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tzzsc" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.245616 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9xw2\" (UniqueName: 
\"kubernetes.io/projected/3cdb1565-bb99-4e18-9089-7a2112685704-kube-api-access-p9xw2\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.245689 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.245728 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.511668 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f55w\" (UniqueName: \"kubernetes.io/projected/e90db606-561d-4cbc-b3ca-7078e17685ad-kube-api-access-9f55w\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nr8zz\" (UID: \"e90db606-561d-4cbc-b3ca-7078e17685ad\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nr8zz" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.511729 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zw7m\" (UniqueName: \"kubernetes.io/projected/7bd980be-8cfe-448f-a2e0-7dae86e075c9-kube-api-access-5zw7m\") pod \"watcher-operator-controller-manager-5db88f68c-tzzsc\" 
(UID: \"7bd980be-8cfe-448f-a2e0-7dae86e075c9\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tzzsc" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.511772 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9xw2\" (UniqueName: \"kubernetes.io/projected/3cdb1565-bb99-4e18-9089-7a2112685704-kube-api-access-p9xw2\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.511818 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.511878 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:50:50 crc kubenswrapper[4812]: E0216 13:50:50.512010 4812 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:50:50 crc kubenswrapper[4812]: E0216 13:50:50.512063 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs podName:3cdb1565-bb99-4e18-9089-7a2112685704 nodeName:}" failed. 
No retries permitted until 2026-02-16 13:50:51.012046655 +0000 UTC m=+1140.076377356 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs") pod "openstack-operator-controller-manager-778459db5b-d66gm" (UID: "3cdb1565-bb99-4e18-9089-7a2112685704") : secret "webhook-server-cert" not found Feb 16 13:50:50 crc kubenswrapper[4812]: E0216 13:50:50.515496 4812 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:50:50 crc kubenswrapper[4812]: E0216 13:50:50.515560 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs podName:3cdb1565-bb99-4e18-9089-7a2112685704 nodeName:}" failed. No retries permitted until 2026-02-16 13:50:51.015540896 +0000 UTC m=+1140.079871597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs") pod "openstack-operator-controller-manager-778459db5b-d66gm" (UID: "3cdb1565-bb99-4e18-9089-7a2112685704") : secret "metrics-server-cert" not found Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.515836 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8vnq\" (UniqueName: \"kubernetes.io/projected/ac3c8476-8d98-47f3-b962-23b404164ac2-kube-api-access-h8vnq\") pod \"test-operator-controller-manager-7866795846-mpb9g\" (UID: \"ac3c8476-8d98-47f3-b962-23b404164ac2\") " pod="openstack-operators/test-operator-controller-manager-7866795846-mpb9g" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.516523 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nr8zz"] Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.618920 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert\") pod \"infra-operator-controller-manager-79d975b745-z7qxl\" (UID: \"e9326a1e-ab44-4168-96a4-d140c2f95a88\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" Feb 16 13:50:50 crc kubenswrapper[4812]: E0216 13:50:50.619301 4812 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:50:50 crc kubenswrapper[4812]: E0216 13:50:50.619465 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert podName:e9326a1e-ab44-4168-96a4-d140c2f95a88 nodeName:}" failed. No retries permitted until 2026-02-16 13:50:52.619403069 +0000 UTC m=+1141.683733770 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert") pod "infra-operator-controller-manager-79d975b745-z7qxl" (UID: "e9326a1e-ab44-4168-96a4-d140c2f95a88") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.751353 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-mpb9g" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.820208 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f55w\" (UniqueName: \"kubernetes.io/projected/e90db606-561d-4cbc-b3ca-7078e17685ad-kube-api-access-9f55w\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nr8zz\" (UID: \"e90db606-561d-4cbc-b3ca-7078e17685ad\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nr8zz" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.820227 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zw7m\" (UniqueName: \"kubernetes.io/projected/7bd980be-8cfe-448f-a2e0-7dae86e075c9-kube-api-access-5zw7m\") pod \"watcher-operator-controller-manager-5db88f68c-tzzsc\" (UID: \"7bd980be-8cfe-448f-a2e0-7dae86e075c9\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tzzsc" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.821290 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9xw2\" (UniqueName: \"kubernetes.io/projected/3cdb1565-bb99-4e18-9089-7a2112685704-kube-api-access-p9xw2\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.861051 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-8zb8t"] Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.879612 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-rtgvj"] Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.891267 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-6vps7"] Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.906998 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tzzsc" Feb 16 13:50:50 crc kubenswrapper[4812]: I0216 13:50:50.944761 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nr8zz" Feb 16 13:50:50 crc kubenswrapper[4812]: W0216 13:50:50.950637 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38ed5722_af29_41e2_a323_dfe0c39d537d.slice/crio-a42be4536a3ecdbec0cff5be054d75299039f5764ea56e6ac90b95640dc84d1d WatchSource:0}: Error finding container a42be4536a3ecdbec0cff5be054d75299039f5764ea56e6ac90b95640dc84d1d: Status 404 returned error can't find the container with id a42be4536a3ecdbec0cff5be054d75299039f5764ea56e6ac90b95640dc84d1d Feb 16 13:50:50 crc kubenswrapper[4812]: W0216 13:50:50.969250 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod484efbc6_46c2_44e3_8edb_8273b347f394.slice/crio-e2c71ef2d10903e4d0694b207e0720c6a979c315c579d57fc1505afc6a5459f6 WatchSource:0}: Error finding container e2c71ef2d10903e4d0694b207e0720c6a979c315c579d57fc1505afc6a5459f6: Status 404 returned error can't find the container with id e2c71ef2d10903e4d0694b207e0720c6a979c315c579d57fc1505afc6a5459f6 Feb 16 13:50:51 crc kubenswrapper[4812]: I0216 13:50:51.025579 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " 
pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:50:51 crc kubenswrapper[4812]: I0216 13:50:51.025743 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:50:51 crc kubenswrapper[4812]: E0216 13:50:51.025870 4812 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:50:51 crc kubenswrapper[4812]: E0216 13:50:51.025929 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs podName:3cdb1565-bb99-4e18-9089-7a2112685704 nodeName:}" failed. No retries permitted until 2026-02-16 13:50:52.025911796 +0000 UTC m=+1141.090242497 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs") pod "openstack-operator-controller-manager-778459db5b-d66gm" (UID: "3cdb1565-bb99-4e18-9089-7a2112685704") : secret "metrics-server-cert" not found Feb 16 13:50:51 crc kubenswrapper[4812]: E0216 13:50:51.025985 4812 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:50:51 crc kubenswrapper[4812]: E0216 13:50:51.026033 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs podName:3cdb1565-bb99-4e18-9089-7a2112685704 nodeName:}" failed. No retries permitted until 2026-02-16 13:50:52.026020449 +0000 UTC m=+1141.090351160 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs") pod "openstack-operator-controller-manager-778459db5b-d66gm" (UID: "3cdb1565-bb99-4e18-9089-7a2112685704") : secret "webhook-server-cert" not found Feb 16 13:50:51 crc kubenswrapper[4812]: I0216 13:50:51.228895 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh\" (UID: \"06224f00-35c9-4aae-9dbc-c803abd7de2c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" Feb 16 13:50:51 crc kubenswrapper[4812]: E0216 13:50:51.229054 4812 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:50:51 crc kubenswrapper[4812]: E0216 13:50:51.229107 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert podName:06224f00-35c9-4aae-9dbc-c803abd7de2c nodeName:}" failed. No retries permitted until 2026-02-16 13:50:53.229089032 +0000 UTC m=+1142.293419733 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" (UID: "06224f00-35c9-4aae-9dbc-c803abd7de2c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:50:51 crc kubenswrapper[4812]: I0216 13:50:51.524870 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rtgvj" event={"ID":"62b330d8-6f6a-4daf-ba84-fada3debae44","Type":"ContainerStarted","Data":"34e4faa697a52dfacd9a1fda297f9d0b45ad3b4fea48817adaf5adecbac56c7f"} Feb 16 13:50:51 crc kubenswrapper[4812]: I0216 13:50:51.528557 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8zb8t" event={"ID":"38ed5722-af29-41e2-a323-dfe0c39d537d","Type":"ContainerStarted","Data":"a42be4536a3ecdbec0cff5be054d75299039f5764ea56e6ac90b95640dc84d1d"} Feb 16 13:50:51 crc kubenswrapper[4812]: I0216 13:50:51.531749 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6vps7" event={"ID":"484efbc6-46c2-44e3-8edb-8273b347f394","Type":"ContainerStarted","Data":"e2c71ef2d10903e4d0694b207e0720c6a979c315c579d57fc1505afc6a5459f6"} Feb 16 13:50:51 crc kubenswrapper[4812]: I0216 13:50:51.915829 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5lnxb"] Feb 16 13:50:51 crc kubenswrapper[4812]: I0216 13:50:51.915860 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-ckx77"] Feb 16 13:50:51 crc kubenswrapper[4812]: I0216 13:50:51.923962 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-9j44f"] Feb 16 13:50:51 crc kubenswrapper[4812]: 
I0216 13:50:51.931606 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-7sb4m"] Feb 16 13:50:51 crc kubenswrapper[4812]: I0216 13:50:51.947305 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-2vz54"] Feb 16 13:50:51 crc kubenswrapper[4812]: W0216 13:50:51.950359 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd14b07fa_996e_407e_b4ff_9cb90a7c8ca1.slice/crio-d179e26ae59d8d77fb6a9ed5d91dc0a1f6521370c9474141ea60a3fafc572d34 WatchSource:0}: Error finding container d179e26ae59d8d77fb6a9ed5d91dc0a1f6521370c9474141ea60a3fafc572d34: Status 404 returned error can't find the container with id d179e26ae59d8d77fb6a9ed5d91dc0a1f6521370c9474141ea60a3fafc572d34 Feb 16 13:50:51 crc kubenswrapper[4812]: I0216 13:50:51.960958 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xsnnh"] Feb 16 13:50:51 crc kubenswrapper[4812]: I0216 13:50:51.969265 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6zzj"] Feb 16 13:50:51 crc kubenswrapper[4812]: W0216 13:50:51.982822 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e2f91a6_d4f8_422e_bfc1_a78ab10f1338.slice/crio-591c66e919d2266397b0e068f4db6a07e4326da7fc21aada547d3b747493ce55 WatchSource:0}: Error finding container 591c66e919d2266397b0e068f4db6a07e4326da7fc21aada547d3b747493ce55: Status 404 returned error can't find the container with id 591c66e919d2266397b0e068f4db6a07e4326da7fc21aada547d3b747493ce55 Feb 16 13:50:51 crc kubenswrapper[4812]: I0216 13:50:51.984907 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-c5l89"] Feb 16 13:50:52 crc kubenswrapper[4812]: W0216 13:50:52.006713 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ef01067_cb64_47cd_a065_9d9677b9646c.slice/crio-4a8b78bca79028d2cb6120b3a4749110649e6f2bdaa5217e79d32d16a5dfc316 WatchSource:0}: Error finding container 4a8b78bca79028d2cb6120b3a4749110649e6f2bdaa5217e79d32d16a5dfc316: Status 404 returned error can't find the container with id 4a8b78bca79028d2cb6120b3a4749110649e6f2bdaa5217e79d32d16a5dfc316 Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.049190 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-ggbhg"] Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.051345 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.051495 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.051703 4812 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.051780 4812 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs podName:3cdb1565-bb99-4e18-9089-7a2112685704 nodeName:}" failed. No retries permitted until 2026-02-16 13:50:54.051754131 +0000 UTC m=+1143.116084832 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs") pod "openstack-operator-controller-manager-778459db5b-d66gm" (UID: "3cdb1565-bb99-4e18-9089-7a2112685704") : secret "webhook-server-cert" not found Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.053071 4812 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.056076 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs podName:3cdb1565-bb99-4e18-9089-7a2112685704 nodeName:}" failed. No retries permitted until 2026-02-16 13:50:54.056048495 +0000 UTC m=+1143.120379196 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs") pod "openstack-operator-controller-manager-778459db5b-d66gm" (UID: "3cdb1565-bb99-4e18-9089-7a2112685704") : secret "metrics-server-cert" not found Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.076698 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-866896b95f-8plmx"] Feb 16 13:50:52 crc kubenswrapper[4812]: W0216 13:50:52.084153 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode75f5735_aff0_453a_8be9_4f55966c7232.slice/crio-15cd0cd5b7f94962d8a53ea7055d4a45c752758b9d2e4443d23a6dd611b6a7c4 WatchSource:0}: Error finding container 15cd0cd5b7f94962d8a53ea7055d4a45c752758b9d2e4443d23a6dd611b6a7c4: Status 404 returned error can't find the container with id 15cd0cd5b7f94962d8a53ea7055d4a45c752758b9d2e4443d23a6dd611b6a7c4 Feb 16 13:50:52 crc kubenswrapper[4812]: W0216 13:50:52.319005 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf05e0adf_a8ed_41cf_9808_b10b0c36e48d.slice/crio-d99f80ae7c3428990679775be3704affeb8a127f3f8438d26527e00ee76651cb WatchSource:0}: Error finding container d99f80ae7c3428990679775be3704affeb8a127f3f8438d26527e00ee76651cb: Status 404 returned error can't find the container with id d99f80ae7c3428990679775be3704affeb8a127f3f8438d26527e00ee76651cb Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.305422 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-lwc2x"] Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.320340 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-sh8cv"] Feb 16 13:50:52 crc 
kubenswrapper[4812]: E0216 13:50:52.333642 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-27l2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-lwc2x_openstack-operators(21b09391-dc85-4bf1-9210-882f3ee0af01): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.334913 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-lwc2x" podUID="21b09391-dc85-4bf1-9210-882f3ee0af01" Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.335747 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nr8zz"] Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.337259 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5zw7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-tzzsc_openstack-operators(7bd980be-8cfe-448f-a2e0-7dae86e075c9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.338756 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tzzsc" podUID="7bd980be-8cfe-448f-a2e0-7dae86e075c9" Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.344905 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-84k8d"] Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.353188 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h8vnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-mpb9g_openstack-operators(ac3c8476-8d98-47f3-b962-23b404164ac2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.354238 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8wwqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-ts4nk_openstack-operators(aefc705b-fdf3-4a72-9a38-a78907603aca): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.354348 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-mpb9g" podUID="ac3c8476-8d98-47f3-b962-23b404164ac2" Feb 16 13:50:52 crc 
kubenswrapper[4812]: E0216 13:50:52.355729 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ts4nk" podUID="aefc705b-fdf3-4a72-9a38-a78907603aca" Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.357328 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-tzzsc"] Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.357473 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9f55w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-nr8zz_openstack-operators(e90db606-561d-4cbc-b3ca-7078e17685ad): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.358866 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nr8zz" podUID="e90db606-561d-4cbc-b3ca-7078e17685ad" Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.365903 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-mpb9g"] Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.371698 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-ts4nk"] Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.543665 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9j44f" 
event={"ID":"d14b07fa-996e-407e-b4ff-9cb90a7c8ca1","Type":"ContainerStarted","Data":"d179e26ae59d8d77fb6a9ed5d91dc0a1f6521370c9474141ea60a3fafc572d34"} Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.551601 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-mpb9g" event={"ID":"ac3c8476-8d98-47f3-b962-23b404164ac2","Type":"ContainerStarted","Data":"052a05427b9754c08e22a4e43c50b96b5e77483487942d5af847cb68f65fd44a"} Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.552795 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-mpb9g" podUID="ac3c8476-8d98-47f3-b962-23b404164ac2" Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.553401 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6zzj" event={"ID":"eec306d2-c02f-4a72-bc69-95ee26d33688","Type":"ContainerStarted","Data":"8610ebec3c2082d6c3a12b7aec4347fa7444afcba9e3bcaecd58b03e946c25aa"} Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.554701 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-lwc2x" event={"ID":"21b09391-dc85-4bf1-9210-882f3ee0af01","Type":"ContainerStarted","Data":"66c5cfd7d7b0450d7b999d1c03247ffebf82330785fdc2f514b2c8b600b2c909"} Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.556110 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-lwc2x" podUID="21b09391-dc85-4bf1-9210-882f3ee0af01" Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.557278 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7sb4m" event={"ID":"0c6d2754-f4e2-497a-aa47-aa568aa9805c","Type":"ContainerStarted","Data":"2e539a6ad824cc03d214530b0624218babf2d9a046481c20f33863d7ff07a374"} Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.559995 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-866896b95f-8plmx" event={"ID":"e75f5735-aff0-453a-8be9-4f55966c7232","Type":"ContainerStarted","Data":"15cd0cd5b7f94962d8a53ea7055d4a45c752758b9d2e4443d23a6dd611b6a7c4"} Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.562638 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nr8zz" event={"ID":"e90db606-561d-4cbc-b3ca-7078e17685ad","Type":"ContainerStarted","Data":"0886dc9d8864ad4f04192f6de6c4a43c526dc2c181ef7eb97df195bd2685c7f1"} Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.564868 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nr8zz" podUID="e90db606-561d-4cbc-b3ca-7078e17685ad" Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.565357 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-ckx77" 
event={"ID":"2e2f91a6-d4f8-422e-bfc1-a78ab10f1338","Type":"ContainerStarted","Data":"591c66e919d2266397b0e068f4db6a07e4326da7fc21aada547d3b747493ce55"} Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.572304 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5lnxb" event={"ID":"2e961da9-05f1-4eaf-ba3a-5d5bc14b7704","Type":"ContainerStarted","Data":"e8930126d0b734a5181b3901a08e13580ffc02fbb9f8d20fda841451206268b7"} Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.574278 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-84k8d" event={"ID":"f05e0adf-a8ed-41cf-9808-b10b0c36e48d","Type":"ContainerStarted","Data":"d99f80ae7c3428990679775be3704affeb8a127f3f8438d26527e00ee76651cb"} Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.576127 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ggbhg" event={"ID":"7ef01067-cb64-47cd-a065-9d9677b9646c","Type":"ContainerStarted","Data":"4a8b78bca79028d2cb6120b3a4749110649e6f2bdaa5217e79d32d16a5dfc316"} Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.580182 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-2vz54" event={"ID":"ce4b529c-2a7c-4919-8a99-78aa7eae9828","Type":"ContainerStarted","Data":"e7a48171e22b62bfef18c33141610e0c44ddd16499744ee94dae47427d0abb92"} Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.582576 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tzzsc" event={"ID":"7bd980be-8cfe-448f-a2e0-7dae86e075c9","Type":"ContainerStarted","Data":"9dc2670bd1174856140a0c8382977e06239de1cab75df026f7a6e46b97abde12"} Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.586727 4812 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tzzsc" podUID="7bd980be-8cfe-448f-a2e0-7dae86e075c9" Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.587132 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ts4nk" event={"ID":"aefc705b-fdf3-4a72-9a38-a78907603aca","Type":"ContainerStarted","Data":"30532e19cfbb307d6f79f83dfa29c0a93a177a16ca7fae5ccdaf1f74279b7b4a"} Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.588018 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ts4nk" podUID="aefc705b-fdf3-4a72-9a38-a78907603aca" Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.591837 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-sh8cv" event={"ID":"eb78077d-7a72-4293-a0bf-8f7ce62aad8d","Type":"ContainerStarted","Data":"1b9f3336c841b78922960ccc1dba58d3f02535f6069688f082a98388846763e3"} Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.594098 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-c5l89" event={"ID":"fcba7077-c2f4-4d80-ac24-955ddf007acc","Type":"ContainerStarted","Data":"d03f18d78cd355291ef1ed498d7e209e94f798bea06de86ba3ec2dfdbff139a0"} Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.597987 4812 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsnnh" event={"ID":"961308e3-cfdc-43ac-8cf0-63cdc9e8900d","Type":"ContainerStarted","Data":"b1a40f991f23d164978d2f9efee1efd5083c0a3e5710017fb47ba843db20d8ec"} Feb 16 13:50:52 crc kubenswrapper[4812]: I0216 13:50:52.660720 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert\") pod \"infra-operator-controller-manager-79d975b745-z7qxl\" (UID: \"e9326a1e-ab44-4168-96a4-d140c2f95a88\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.660886 4812 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:50:52 crc kubenswrapper[4812]: E0216 13:50:52.660971 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert podName:e9326a1e-ab44-4168-96a4-d140c2f95a88 nodeName:}" failed. No retries permitted until 2026-02-16 13:50:56.660947219 +0000 UTC m=+1145.725277961 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert") pod "infra-operator-controller-manager-79d975b745-z7qxl" (UID: "e9326a1e-ab44-4168-96a4-d140c2f95a88") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:50:53 crc kubenswrapper[4812]: I0216 13:50:53.296522 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh\" (UID: \"06224f00-35c9-4aae-9dbc-c803abd7de2c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" Feb 16 13:50:53 crc kubenswrapper[4812]: E0216 13:50:53.296689 4812 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:50:53 crc kubenswrapper[4812]: E0216 13:50:53.296802 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert podName:06224f00-35c9-4aae-9dbc-c803abd7de2c nodeName:}" failed. No retries permitted until 2026-02-16 13:50:57.296780105 +0000 UTC m=+1146.361110806 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" (UID: "06224f00-35c9-4aae-9dbc-c803abd7de2c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:50:53 crc kubenswrapper[4812]: E0216 13:50:53.611982 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-mpb9g" podUID="ac3c8476-8d98-47f3-b962-23b404164ac2" Feb 16 13:50:53 crc kubenswrapper[4812]: E0216 13:50:53.612089 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ts4nk" podUID="aefc705b-fdf3-4a72-9a38-a78907603aca" Feb 16 13:50:53 crc kubenswrapper[4812]: E0216 13:50:53.612132 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tzzsc" podUID="7bd980be-8cfe-448f-a2e0-7dae86e075c9" Feb 16 13:50:53 crc kubenswrapper[4812]: E0216 13:50:53.613926 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nr8zz" podUID="e90db606-561d-4cbc-b3ca-7078e17685ad" Feb 16 13:50:53 crc kubenswrapper[4812]: E0216 13:50:53.614138 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-lwc2x" podUID="21b09391-dc85-4bf1-9210-882f3ee0af01" Feb 16 13:50:54 crc kubenswrapper[4812]: I0216 13:50:54.064132 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:50:54 crc kubenswrapper[4812]: I0216 13:50:54.064242 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:50:54 crc kubenswrapper[4812]: E0216 13:50:54.064967 4812 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:50:54 crc kubenswrapper[4812]: E0216 13:50:54.065055 4812 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs podName:3cdb1565-bb99-4e18-9089-7a2112685704 nodeName:}" failed. No retries permitted until 2026-02-16 13:50:58.065031668 +0000 UTC m=+1147.129362369 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs") pod "openstack-operator-controller-manager-778459db5b-d66gm" (UID: "3cdb1565-bb99-4e18-9089-7a2112685704") : secret "metrics-server-cert" not found Feb 16 13:50:54 crc kubenswrapper[4812]: E0216 13:50:54.065640 4812 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:50:54 crc kubenswrapper[4812]: E0216 13:50:54.065682 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs podName:3cdb1565-bb99-4e18-9089-7a2112685704 nodeName:}" failed. No retries permitted until 2026-02-16 13:50:58.065672476 +0000 UTC m=+1147.130003187 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs") pod "openstack-operator-controller-manager-778459db5b-d66gm" (UID: "3cdb1565-bb99-4e18-9089-7a2112685704") : secret "webhook-server-cert" not found Feb 16 13:50:56 crc kubenswrapper[4812]: E0216 13:50:56.732358 4812 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 13:50:56 crc kubenswrapper[4812]: E0216 13:50:56.732989 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert podName:e9326a1e-ab44-4168-96a4-d140c2f95a88 nodeName:}" failed. No retries permitted until 2026-02-16 13:51:04.732969212 +0000 UTC m=+1153.797299913 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert") pod "infra-operator-controller-manager-79d975b745-z7qxl" (UID: "e9326a1e-ab44-4168-96a4-d140c2f95a88") : secret "infra-operator-webhook-server-cert" not found Feb 16 13:50:56 crc kubenswrapper[4812]: I0216 13:50:56.732238 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert\") pod \"infra-operator-controller-manager-79d975b745-z7qxl\" (UID: \"e9326a1e-ab44-4168-96a4-d140c2f95a88\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" Feb 16 13:50:57 crc kubenswrapper[4812]: I0216 13:50:57.343103 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh\" (UID: \"06224f00-35c9-4aae-9dbc-c803abd7de2c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" Feb 16 13:50:57 crc kubenswrapper[4812]: E0216 13:50:57.343333 4812 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:50:57 crc kubenswrapper[4812]: E0216 13:50:57.343430 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert podName:06224f00-35c9-4aae-9dbc-c803abd7de2c nodeName:}" failed. No retries permitted until 2026-02-16 13:51:05.343407866 +0000 UTC m=+1154.407738567 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" (UID: "06224f00-35c9-4aae-9dbc-c803abd7de2c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 13:50:58 crc kubenswrapper[4812]: I0216 13:50:58.156145 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:50:58 crc kubenswrapper[4812]: I0216 13:50:58.156550 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:50:58 crc kubenswrapper[4812]: E0216 13:50:58.156338 4812 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 13:50:58 crc kubenswrapper[4812]: E0216 13:50:58.156634 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs podName:3cdb1565-bb99-4e18-9089-7a2112685704 nodeName:}" failed. No retries permitted until 2026-02-16 13:51:06.156615954 +0000 UTC m=+1155.220946655 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs") pod "openstack-operator-controller-manager-778459db5b-d66gm" (UID: "3cdb1565-bb99-4e18-9089-7a2112685704") : secret "metrics-server-cert" not found Feb 16 13:50:58 crc kubenswrapper[4812]: E0216 13:50:58.156747 4812 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 13:50:58 crc kubenswrapper[4812]: E0216 13:50:58.156808 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs podName:3cdb1565-bb99-4e18-9089-7a2112685704 nodeName:}" failed. No retries permitted until 2026-02-16 13:51:06.156793989 +0000 UTC m=+1155.221124690 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs") pod "openstack-operator-controller-manager-778459db5b-d66gm" (UID: "3cdb1565-bb99-4e18-9089-7a2112685704") : secret "webhook-server-cert" not found Feb 16 13:51:04 crc kubenswrapper[4812]: E0216 13:51:04.272258 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.39:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 16 13:51:04 crc kubenswrapper[4812]: E0216 13:51:04.272617 4812 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.39:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 16 13:51:04 crc kubenswrapper[4812]: E0216 13:51:04.272826 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:38.102.83.39:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8clr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-866896b95f-8plmx_openstack-operators(e75f5735-aff0-453a-8be9-4f55966c7232): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:51:04 crc kubenswrapper[4812]: E0216 13:51:04.274050 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-866896b95f-8plmx" podUID="e75f5735-aff0-453a-8be9-4f55966c7232" Feb 16 13:51:04 crc kubenswrapper[4812]: I0216 13:51:04.771179 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert\") pod \"infra-operator-controller-manager-79d975b745-z7qxl\" (UID: \"e9326a1e-ab44-4168-96a4-d140c2f95a88\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" Feb 16 13:51:04 crc kubenswrapper[4812]: I0216 13:51:04.778380 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/e9326a1e-ab44-4168-96a4-d140c2f95a88-cert\") pod \"infra-operator-controller-manager-79d975b745-z7qxl\" (UID: \"e9326a1e-ab44-4168-96a4-d140c2f95a88\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" Feb 16 13:51:04 crc kubenswrapper[4812]: E0216 13:51:04.778869 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.39:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-866896b95f-8plmx" podUID="e75f5735-aff0-453a-8be9-4f55966c7232" Feb 16 13:51:05 crc kubenswrapper[4812]: I0216 13:51:05.036842 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qlgw9" Feb 16 13:51:05 crc kubenswrapper[4812]: I0216 13:51:05.044549 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" Feb 16 13:51:05 crc kubenswrapper[4812]: I0216 13:51:05.380792 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh\" (UID: \"06224f00-35c9-4aae-9dbc-c803abd7de2c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" Feb 16 13:51:05 crc kubenswrapper[4812]: I0216 13:51:05.385317 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06224f00-35c9-4aae-9dbc-c803abd7de2c-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh\" (UID: \"06224f00-35c9-4aae-9dbc-c803abd7de2c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" Feb 16 13:51:05 crc kubenswrapper[4812]: I0216 13:51:05.417157 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fkv8d" Feb 16 13:51:05 crc kubenswrapper[4812]: I0216 13:51:05.425760 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" Feb 16 13:51:06 crc kubenswrapper[4812]: I0216 13:51:06.191281 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:51:06 crc kubenswrapper[4812]: I0216 13:51:06.191672 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:51:06 crc kubenswrapper[4812]: I0216 13:51:06.196335 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-webhook-certs\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:51:06 crc kubenswrapper[4812]: I0216 13:51:06.197252 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3cdb1565-bb99-4e18-9089-7a2112685704-metrics-certs\") pod \"openstack-operator-controller-manager-778459db5b-d66gm\" (UID: \"3cdb1565-bb99-4e18-9089-7a2112685704\") " pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:51:06 crc kubenswrapper[4812]: I0216 13:51:06.225319 4812 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-dbpkw" Feb 16 13:51:06 crc kubenswrapper[4812]: I0216 13:51:06.233774 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:51:08 crc kubenswrapper[4812]: E0216 13:51:08.797995 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 16 13:51:08 crc kubenswrapper[4812]: E0216 13:51:08.798523 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c4bxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-9j44f_openstack-operators(d14b07fa-996e-407e-b4ff-9cb90a7c8ca1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:51:08 crc kubenswrapper[4812]: E0216 13:51:08.800063 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9j44f" podUID="d14b07fa-996e-407e-b4ff-9cb90a7c8ca1" Feb 16 13:51:09 crc kubenswrapper[4812]: E0216 13:51:09.430647 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 16 13:51:09 crc kubenswrapper[4812]: E0216 13:51:09.430823 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kh422,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-ggbhg_openstack-operators(7ef01067-cb64-47cd-a065-9d9677b9646c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:51:09 crc kubenswrapper[4812]: E0216 13:51:09.432304 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ggbhg" podUID="7ef01067-cb64-47cd-a065-9d9677b9646c" Feb 16 13:51:09 crc kubenswrapper[4812]: E0216 13:51:09.810180 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ggbhg" podUID="7ef01067-cb64-47cd-a065-9d9677b9646c" Feb 16 13:51:09 crc kubenswrapper[4812]: E0216 13:51:09.810265 4812 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9j44f" podUID="d14b07fa-996e-407e-b4ff-9cb90a7c8ca1" Feb 16 13:51:10 crc kubenswrapper[4812]: E0216 13:51:10.499519 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 16 13:51:10 crc kubenswrapper[4812]: E0216 13:51:10.499729 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qgf4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-2vz54_openstack-operators(ce4b529c-2a7c-4919-8a99-78aa7eae9828): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:51:10 crc kubenswrapper[4812]: E0216 13:51:10.500904 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-2vz54" podUID="ce4b529c-2a7c-4919-8a99-78aa7eae9828" Feb 16 13:51:10 crc kubenswrapper[4812]: E0216 13:51:10.816857 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-2vz54" podUID="ce4b529c-2a7c-4919-8a99-78aa7eae9828" Feb 16 13:51:12 crc kubenswrapper[4812]: E0216 13:51:12.740348 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 16 13:51:12 crc kubenswrapper[4812]: E0216 13:51:12.740612 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r7xsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-5lnxb_openstack-operators(2e961da9-05f1-4eaf-ba3a-5d5bc14b7704): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:51:12 crc kubenswrapper[4812]: E0216 13:51:12.742322 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5lnxb" podUID="2e961da9-05f1-4eaf-ba3a-5d5bc14b7704" Feb 16 13:51:12 crc kubenswrapper[4812]: E0216 13:51:12.829048 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5lnxb" podUID="2e961da9-05f1-4eaf-ba3a-5d5bc14b7704" Feb 16 13:51:13 crc kubenswrapper[4812]: E0216 13:51:13.512787 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 16 13:51:13 crc kubenswrapper[4812]: E0216 13:51:13.512960 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9lhfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-84k8d_openstack-operators(f05e0adf-a8ed-41cf-9808-b10b0c36e48d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:51:13 crc kubenswrapper[4812]: E0216 13:51:13.514123 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-84k8d" podUID="f05e0adf-a8ed-41cf-9808-b10b0c36e48d" Feb 16 13:51:13 crc kubenswrapper[4812]: E0216 13:51:13.843177 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-84k8d" podUID="f05e0adf-a8ed-41cf-9808-b10b0c36e48d" Feb 16 13:51:13 crc kubenswrapper[4812]: E0216 13:51:13.972531 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 16 13:51:13 crc kubenswrapper[4812]: E0216 13:51:13.972944 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-52lgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-7sb4m_openstack-operators(0c6d2754-f4e2-497a-aa47-aa568aa9805c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:51:13 crc kubenswrapper[4812]: E0216 13:51:13.974181 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7sb4m" podUID="0c6d2754-f4e2-497a-aa47-aa568aa9805c" Feb 16 13:51:14 crc kubenswrapper[4812]: I0216 13:51:14.549558 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:51:14 crc kubenswrapper[4812]: I0216 13:51:14.549646 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:51:14 crc kubenswrapper[4812]: I0216 13:51:14.549780 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:51:14 crc kubenswrapper[4812]: I0216 13:51:14.551365 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0779ef9b368371eaae022df11f7e6d3b1b2344936b30d611f68295ab80bea825"} pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:51:14 crc kubenswrapper[4812]: I0216 13:51:14.551423 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" containerID="cri-o://0779ef9b368371eaae022df11f7e6d3b1b2344936b30d611f68295ab80bea825" gracePeriod=600 Feb 16 13:51:14 crc kubenswrapper[4812]: I0216 13:51:14.847401 4812 generic.go:334] "Generic (PLEG): container finished" podID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerID="0779ef9b368371eaae022df11f7e6d3b1b2344936b30d611f68295ab80bea825" exitCode=0 Feb 16 13:51:14 crc kubenswrapper[4812]: I0216 13:51:14.847493 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerDied","Data":"0779ef9b368371eaae022df11f7e6d3b1b2344936b30d611f68295ab80bea825"} Feb 16 13:51:14 crc kubenswrapper[4812]: I0216 13:51:14.847578 4812 scope.go:117] "RemoveContainer" containerID="7cdd40ec1858c86be76b1abaa1c0c47ea05268682d8c62fb36cfc403870db38c" Feb 16 13:51:14 crc kubenswrapper[4812]: E0216 13:51:14.848785 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7sb4m" podUID="0c6d2754-f4e2-497a-aa47-aa568aa9805c" Feb 16 13:51:16 crc kubenswrapper[4812]: E0216 13:51:16.675191 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 16 13:51:16 crc kubenswrapper[4812]: E0216 13:51:16.675398 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xwjww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-sh8cv_openstack-operators(eb78077d-7a72-4293-a0bf-8f7ce62aad8d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:51:16 crc kubenswrapper[4812]: E0216 13:51:16.677167 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-sh8cv" podUID="eb78077d-7a72-4293-a0bf-8f7ce62aad8d" Feb 16 13:51:16 crc kubenswrapper[4812]: E0216 13:51:16.864026 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-sh8cv" podUID="eb78077d-7a72-4293-a0bf-8f7ce62aad8d" Feb 16 13:51:18 crc kubenswrapper[4812]: I0216 13:51:18.756428 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm"] Feb 16 13:51:18 crc kubenswrapper[4812]: W0216 13:51:18.804291 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cdb1565_bb99_4e18_9089_7a2112685704.slice/crio-28f0c3405ea8cb1e17dfb5d997fb75ccee3042c5173c5fe184d762a6a8cf651a WatchSource:0}: Error finding container 28f0c3405ea8cb1e17dfb5d997fb75ccee3042c5173c5fe184d762a6a8cf651a: Status 404 returned error can't find the container with id 28f0c3405ea8cb1e17dfb5d997fb75ccee3042c5173c5fe184d762a6a8cf651a Feb 16 13:51:18 crc kubenswrapper[4812]: I0216 13:51:18.826551 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh"] Feb 16 13:51:18 crc kubenswrapper[4812]: W0216 13:51:18.860987 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06224f00_35c9_4aae_9dbc_c803abd7de2c.slice/crio-906a2e5dd7e6d5f2aa8d1061acad40587e545d43a13d882436da496d30f72524 WatchSource:0}: Error finding container 906a2e5dd7e6d5f2aa8d1061acad40587e545d43a13d882436da496d30f72524: Status 404 returned error can't find 
the container with id 906a2e5dd7e6d5f2aa8d1061acad40587e545d43a13d882436da496d30f72524 Feb 16 13:51:18 crc kubenswrapper[4812]: I0216 13:51:18.876083 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl"] Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:18.925942 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ts4nk" event={"ID":"aefc705b-fdf3-4a72-9a38-a78907603aca","Type":"ContainerStarted","Data":"33a9d66087a41f578c02482f17fdc09946cc26afb2d52aeff725a822ba1015a1"} Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:18.926866 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ts4nk" Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:18.976369 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-c5l89" event={"ID":"fcba7077-c2f4-4d80-ac24-955ddf007acc","Type":"ContainerStarted","Data":"5d50cf86c4cc523145a92c702dc22153327119289bbf6982c0c00cf698dfa4e3"} Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:18.976641 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-c5l89" Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:18.979761 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-lwc2x" event={"ID":"21b09391-dc85-4bf1-9210-882f3ee0af01","Type":"ContainerStarted","Data":"318bf736cac995db6480abe6eecf48163e00aaa1dcb290e2289e03b92afdbcfd"} Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:18.980309 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-lwc2x" Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 
13:51:18.981905 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8zb8t" event={"ID":"38ed5722-af29-41e2-a323-dfe0c39d537d","Type":"ContainerStarted","Data":"7e7ff50c5873c4a5e0b5a04a18bd029ae20fda80b7ba3459f25a3661d4793a79"} Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:18.982403 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8zb8t" Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:18.987879 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rtgvj" event={"ID":"62b330d8-6f6a-4daf-ba84-fada3debae44","Type":"ContainerStarted","Data":"7785acb6de4a68af559681598f01c24dfc8cbed64382627e20903dba5d729fa1"} Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:18.988809 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rtgvj" Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.026016 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" event={"ID":"3cdb1565-bb99-4e18-9089-7a2112685704","Type":"ContainerStarted","Data":"28f0c3405ea8cb1e17dfb5d997fb75ccee3042c5173c5fe184d762a6a8cf651a"} Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.030378 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6vps7" event={"ID":"484efbc6-46c2-44e3-8edb-8273b347f394","Type":"ContainerStarted","Data":"5a474a2cb94ecd3eba0cc12bcb44ced83ec04a06ad663e6133469de442fcfd04"} Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.031303 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6vps7" Feb 16 
13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.032406 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" event={"ID":"06224f00-35c9-4aae-9dbc-c803abd7de2c","Type":"ContainerStarted","Data":"906a2e5dd7e6d5f2aa8d1061acad40587e545d43a13d882436da496d30f72524"} Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.033783 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6zzj" event={"ID":"eec306d2-c02f-4a72-bc69-95ee26d33688","Type":"ContainerStarted","Data":"9a7cde8626815088892ec30341a8312cd83c66fd211baf59b4948d548564fefb"} Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.034362 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6zzj" Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.038432 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-ckx77" event={"ID":"2e2f91a6-d4f8-422e-bfc1-a78ab10f1338","Type":"ContainerStarted","Data":"f27dde5d37d0660a44805e479740e29529c4c56ec6fd8d089de0d4d9d4f1e98e"} Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.039126 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-ckx77" Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.042859 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsnnh" event={"ID":"961308e3-cfdc-43ac-8cf0-63cdc9e8900d","Type":"ContainerStarted","Data":"7f41272c17baa2124ca9b543434dcf30842ede56d2f1fd4a1c79dbdaef4aae8e"} Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.043624 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsnnh" Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.155184 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-lwc2x" podStartSLOduration=5.138014329 podStartE2EDuration="31.155168445s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:50:52.33064841 +0000 UTC m=+1141.394979111" lastFinishedPulling="2026-02-16 13:51:18.347802526 +0000 UTC m=+1167.412133227" observedRunningTime="2026-02-16 13:51:19.151994304 +0000 UTC m=+1168.216325005" watchObservedRunningTime="2026-02-16 13:51:19.155168445 +0000 UTC m=+1168.219499156" Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.171892 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6vps7" podStartSLOduration=7.7212197830000004 podStartE2EDuration="31.171875287s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:50:50.97956489 +0000 UTC m=+1140.043895591" lastFinishedPulling="2026-02-16 13:51:14.430220384 +0000 UTC m=+1163.494551095" observedRunningTime="2026-02-16 13:51:19.166367348 +0000 UTC m=+1168.230698049" watchObservedRunningTime="2026-02-16 13:51:19.171875287 +0000 UTC m=+1168.236205988" Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.201545 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6zzj" podStartSLOduration=8.746275075 podStartE2EDuration="31.201524351s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:50:51.97503512 +0000 UTC m=+1141.039365821" lastFinishedPulling="2026-02-16 13:51:14.430284386 +0000 UTC m=+1163.494615097" observedRunningTime="2026-02-16 13:51:19.19523942 +0000 UTC m=+1168.259570121" 
watchObservedRunningTime="2026-02-16 13:51:19.201524351 +0000 UTC m=+1168.265855062" Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.243402 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rtgvj" podStartSLOduration=7.793930259 podStartE2EDuration="31.243375418s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:50:50.97956361 +0000 UTC m=+1140.043894311" lastFinishedPulling="2026-02-16 13:51:14.429008769 +0000 UTC m=+1163.493339470" observedRunningTime="2026-02-16 13:51:19.231541707 +0000 UTC m=+1168.295872428" watchObservedRunningTime="2026-02-16 13:51:19.243375418 +0000 UTC m=+1168.307706119" Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.283334 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsnnh" podStartSLOduration=8.85467809 podStartE2EDuration="31.283318999s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:50:52.001636757 +0000 UTC m=+1141.065967458" lastFinishedPulling="2026-02-16 13:51:14.430277666 +0000 UTC m=+1163.494608367" observedRunningTime="2026-02-16 13:51:19.26532844 +0000 UTC m=+1168.329659151" watchObservedRunningTime="2026-02-16 13:51:19.283318999 +0000 UTC m=+1168.347649700" Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.284381 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ts4nk" podStartSLOduration=5.345035976 podStartE2EDuration="31.284377329s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:50:52.354117526 +0000 UTC m=+1141.418448227" lastFinishedPulling="2026-02-16 13:51:18.293458879 +0000 UTC m=+1167.357789580" observedRunningTime="2026-02-16 13:51:19.281216548 +0000 UTC m=+1168.345547239" 
watchObservedRunningTime="2026-02-16 13:51:19.284377329 +0000 UTC m=+1168.348708030" Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.307765 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8zb8t" podStartSLOduration=7.850135008 podStartE2EDuration="31.307746943s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:50:50.972663161 +0000 UTC m=+1140.036993862" lastFinishedPulling="2026-02-16 13:51:14.430275096 +0000 UTC m=+1163.494605797" observedRunningTime="2026-02-16 13:51:19.302255725 +0000 UTC m=+1168.366586426" watchObservedRunningTime="2026-02-16 13:51:19.307746943 +0000 UTC m=+1168.372077634" Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.335882 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-c5l89" podStartSLOduration=8.859343864 podStartE2EDuration="31.335859343s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:50:51.953789758 +0000 UTC m=+1141.018120459" lastFinishedPulling="2026-02-16 13:51:14.430305247 +0000 UTC m=+1163.494635938" observedRunningTime="2026-02-16 13:51:19.32847642 +0000 UTC m=+1168.392807141" watchObservedRunningTime="2026-02-16 13:51:19.335859343 +0000 UTC m=+1168.400190054" Feb 16 13:51:19 crc kubenswrapper[4812]: I0216 13:51:19.360010 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-ckx77" podStartSLOduration=8.92198804 podStartE2EDuration="31.359991869s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:50:51.990825055 +0000 UTC m=+1141.055155756" lastFinishedPulling="2026-02-16 13:51:14.428828894 +0000 UTC m=+1163.493159585" observedRunningTime="2026-02-16 13:51:19.352837963 +0000 UTC m=+1168.417168664" 
watchObservedRunningTime="2026-02-16 13:51:19.359991869 +0000 UTC m=+1168.424322570" Feb 16 13:51:20 crc kubenswrapper[4812]: I0216 13:51:20.050746 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" event={"ID":"e9326a1e-ab44-4168-96a4-d140c2f95a88","Type":"ContainerStarted","Data":"73537e5af7830438f4924173db6890da9b9d26caea564372ddb49c6c7d276ac3"} Feb 16 13:51:20 crc kubenswrapper[4812]: I0216 13:51:20.053229 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerStarted","Data":"e326161e933a75a00a9297a9e1cbd3d6a1ed2f661892851e02b5e7109aebd29d"} Feb 16 13:51:20 crc kubenswrapper[4812]: I0216 13:51:20.058901 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nr8zz" event={"ID":"e90db606-561d-4cbc-b3ca-7078e17685ad","Type":"ContainerStarted","Data":"e3e21ca8814ebf8d600480e3fca78cb9d66951d950eae88e084bd052a524153b"} Feb 16 13:51:20 crc kubenswrapper[4812]: I0216 13:51:20.061180 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tzzsc" event={"ID":"7bd980be-8cfe-448f-a2e0-7dae86e075c9","Type":"ContainerStarted","Data":"2e8d7b9f1c79e3bb7b7f903efd20d95e0e91001b81e33182171b4b2202d7b9bb"} Feb 16 13:51:20 crc kubenswrapper[4812]: I0216 13:51:20.061597 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tzzsc" Feb 16 13:51:20 crc kubenswrapper[4812]: I0216 13:51:20.063058 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" 
event={"ID":"3cdb1565-bb99-4e18-9089-7a2112685704","Type":"ContainerStarted","Data":"5e4d09b03eda169562ec5d286c85c62670b855ca3f2187ddc82f4419613194aa"} Feb 16 13:51:20 crc kubenswrapper[4812]: I0216 13:51:20.063412 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:51:20 crc kubenswrapper[4812]: I0216 13:51:20.064633 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-866896b95f-8plmx" event={"ID":"e75f5735-aff0-453a-8be9-4f55966c7232","Type":"ContainerStarted","Data":"2cde47c3a15812d663c3955a7fb441134e3cd664143de7b4b8583a3e3da322a5"} Feb 16 13:51:20 crc kubenswrapper[4812]: I0216 13:51:20.065024 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-866896b95f-8plmx" Feb 16 13:51:20 crc kubenswrapper[4812]: I0216 13:51:20.066594 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-mpb9g" event={"ID":"ac3c8476-8d98-47f3-b962-23b404164ac2","Type":"ContainerStarted","Data":"a53c54df0191e57219f29bc6ca20e05ed00ec8b2a591cb7935963b856361c43d"} Feb 16 13:51:20 crc kubenswrapper[4812]: I0216 13:51:20.066988 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-mpb9g" Feb 16 13:51:20 crc kubenswrapper[4812]: I0216 13:51:20.107948 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-866896b95f-8plmx" podStartSLOduration=4.038745479 podStartE2EDuration="31.107931006s" podCreationTimestamp="2026-02-16 13:50:49 +0000 UTC" firstStartedPulling="2026-02-16 13:50:52.088160001 +0000 UTC m=+1141.152490702" lastFinishedPulling="2026-02-16 13:51:19.157345528 +0000 UTC m=+1168.221676229" 
observedRunningTime="2026-02-16 13:51:20.105065563 +0000 UTC m=+1169.169396264" watchObservedRunningTime="2026-02-16 13:51:20.107931006 +0000 UTC m=+1169.172261707" Feb 16 13:51:20 crc kubenswrapper[4812]: I0216 13:51:20.122647 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tzzsc" podStartSLOduration=5.167776458 podStartE2EDuration="31.122624119s" podCreationTimestamp="2026-02-16 13:50:49 +0000 UTC" firstStartedPulling="2026-02-16 13:50:52.336955791 +0000 UTC m=+1141.401286492" lastFinishedPulling="2026-02-16 13:51:18.291803452 +0000 UTC m=+1167.356134153" observedRunningTime="2026-02-16 13:51:20.121509757 +0000 UTC m=+1169.185840478" watchObservedRunningTime="2026-02-16 13:51:20.122624119 +0000 UTC m=+1169.186954820" Feb 16 13:51:20 crc kubenswrapper[4812]: I0216 13:51:20.141231 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nr8zz" podStartSLOduration=5.058812139 podStartE2EDuration="31.141216625s" podCreationTimestamp="2026-02-16 13:50:49 +0000 UTC" firstStartedPulling="2026-02-16 13:50:52.357301048 +0000 UTC m=+1141.421631749" lastFinishedPulling="2026-02-16 13:51:18.439705534 +0000 UTC m=+1167.504036235" observedRunningTime="2026-02-16 13:51:20.13825086 +0000 UTC m=+1169.202581561" watchObservedRunningTime="2026-02-16 13:51:20.141216625 +0000 UTC m=+1169.205547326" Feb 16 13:51:20 crc kubenswrapper[4812]: I0216 13:51:20.241510 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-mpb9g" podStartSLOduration=5.246752295 podStartE2EDuration="31.241490555s" podCreationTimestamp="2026-02-16 13:50:49 +0000 UTC" firstStartedPulling="2026-02-16 13:50:52.353009354 +0000 UTC m=+1141.417340065" lastFinishedPulling="2026-02-16 13:51:18.347747634 +0000 UTC m=+1167.412078325" 
observedRunningTime="2026-02-16 13:51:20.23956801 +0000 UTC m=+1169.303898741" watchObservedRunningTime="2026-02-16 13:51:20.241490555 +0000 UTC m=+1169.305821256" Feb 16 13:51:20 crc kubenswrapper[4812]: I0216 13:51:20.292211 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" podStartSLOduration=31.292189676 podStartE2EDuration="31.292189676s" podCreationTimestamp="2026-02-16 13:50:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:51:20.289959492 +0000 UTC m=+1169.354290193" watchObservedRunningTime="2026-02-16 13:51:20.292189676 +0000 UTC m=+1169.356520377" Feb 16 13:51:23 crc kubenswrapper[4812]: I0216 13:51:23.233859 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ggbhg" event={"ID":"7ef01067-cb64-47cd-a065-9d9677b9646c","Type":"ContainerStarted","Data":"2ed78c1e2e40bcf76384b84b9cd63cd8b195575efeb3e560130b84e3de5e7c74"} Feb 16 13:51:23 crc kubenswrapper[4812]: I0216 13:51:23.235683 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ggbhg" Feb 16 13:51:23 crc kubenswrapper[4812]: I0216 13:51:23.258910 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ggbhg" podStartSLOduration=5.890131227 podStartE2EDuration="35.258888511s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:50:52.011870292 +0000 UTC m=+1141.076200993" lastFinishedPulling="2026-02-16 13:51:21.380627566 +0000 UTC m=+1170.444958277" observedRunningTime="2026-02-16 13:51:23.251437037 +0000 UTC m=+1172.315767738" watchObservedRunningTime="2026-02-16 13:51:23.258888511 +0000 UTC m=+1172.323219212" Feb 16 
13:51:24 crc kubenswrapper[4812]: I0216 13:51:24.245252 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9j44f" event={"ID":"d14b07fa-996e-407e-b4ff-9cb90a7c8ca1","Type":"ContainerStarted","Data":"651c8f30993b516b5b68309b105cd22d194794aa2982857efa5a53c6d8db0e36"} Feb 16 13:51:24 crc kubenswrapper[4812]: I0216 13:51:24.247197 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9j44f" Feb 16 13:51:24 crc kubenswrapper[4812]: I0216 13:51:24.261457 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9j44f" podStartSLOduration=5.435167953 podStartE2EDuration="36.261425115s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:50:51.953904961 +0000 UTC m=+1141.018235662" lastFinishedPulling="2026-02-16 13:51:22.780162123 +0000 UTC m=+1171.844492824" observedRunningTime="2026-02-16 13:51:24.261097016 +0000 UTC m=+1173.325427717" watchObservedRunningTime="2026-02-16 13:51:24.261425115 +0000 UTC m=+1173.325755806" Feb 16 13:51:26 crc kubenswrapper[4812]: I0216 13:51:26.287371 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" Feb 16 13:51:28 crc kubenswrapper[4812]: I0216 13:51:28.300165 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-2vz54" event={"ID":"ce4b529c-2a7c-4919-8a99-78aa7eae9828","Type":"ContainerStarted","Data":"09417ca7f8bd4f851ff728e9e6510ac10f177e2cbc371d4aae2e4bbdaf669a7a"} Feb 16 13:51:28 crc kubenswrapper[4812]: I0216 13:51:28.300908 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-2vz54" Feb 16 
13:51:28 crc kubenswrapper[4812]: I0216 13:51:28.301717 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" event={"ID":"e9326a1e-ab44-4168-96a4-d140c2f95a88","Type":"ContainerStarted","Data":"4c5bdcd7ef33c51c1f066c91d116520f4ce46c8a62d29adab9c55ad196c75cdc"} Feb 16 13:51:28 crc kubenswrapper[4812]: I0216 13:51:28.302087 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" Feb 16 13:51:28 crc kubenswrapper[4812]: I0216 13:51:28.303654 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5lnxb" event={"ID":"2e961da9-05f1-4eaf-ba3a-5d5bc14b7704","Type":"ContainerStarted","Data":"dfcfeaf13dab16927ec7d5a05d2fb326e5e1a3d5889d1f206f1c148780f1be14"} Feb 16 13:51:28 crc kubenswrapper[4812]: I0216 13:51:28.304075 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5lnxb" Feb 16 13:51:28 crc kubenswrapper[4812]: I0216 13:51:28.305632 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-84k8d" event={"ID":"f05e0adf-a8ed-41cf-9808-b10b0c36e48d","Type":"ContainerStarted","Data":"90a722afc3dbe7aeb77c5e95e9096c8de95c0f501eae634cf1d58289b5f0874e"} Feb 16 13:51:28 crc kubenswrapper[4812]: I0216 13:51:28.306029 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-84k8d" Feb 16 13:51:28 crc kubenswrapper[4812]: I0216 13:51:28.307853 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" 
event={"ID":"06224f00-35c9-4aae-9dbc-c803abd7de2c","Type":"ContainerStarted","Data":"e92b0b97e8de88d9e10659389a5a6d09db1c7eb19e7278251c2948076840a483"} Feb 16 13:51:28 crc kubenswrapper[4812]: I0216 13:51:28.308066 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" Feb 16 13:51:28 crc kubenswrapper[4812]: I0216 13:51:28.438215 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" podStartSLOduration=31.906752897 podStartE2EDuration="40.438199627s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:51:18.864884629 +0000 UTC m=+1167.929215330" lastFinishedPulling="2026-02-16 13:51:27.396331359 +0000 UTC m=+1176.460662060" observedRunningTime="2026-02-16 13:51:28.433409999 +0000 UTC m=+1177.497740690" watchObservedRunningTime="2026-02-16 13:51:28.438199627 +0000 UTC m=+1177.502530328" Feb 16 13:51:28 crc kubenswrapper[4812]: I0216 13:51:28.440105 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-2vz54" podStartSLOduration=5.03812664 podStartE2EDuration="40.440097462s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:50:51.992893685 +0000 UTC m=+1141.057224386" lastFinishedPulling="2026-02-16 13:51:27.394864507 +0000 UTC m=+1176.459195208" observedRunningTime="2026-02-16 13:51:28.349733247 +0000 UTC m=+1177.414063938" watchObservedRunningTime="2026-02-16 13:51:28.440097462 +0000 UTC m=+1177.504428153" Feb 16 13:51:28 crc kubenswrapper[4812]: I0216 13:51:28.487469 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-84k8d" podStartSLOduration=5.411926402 podStartE2EDuration="40.487452496s" 
podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:50:52.321967209 +0000 UTC m=+1141.386297910" lastFinishedPulling="2026-02-16 13:51:27.397493303 +0000 UTC m=+1176.461824004" observedRunningTime="2026-02-16 13:51:28.482099622 +0000 UTC m=+1177.546430323" watchObservedRunningTime="2026-02-16 13:51:28.487452496 +0000 UTC m=+1177.551783197" Feb 16 13:51:28 crc kubenswrapper[4812]: I0216 13:51:28.655162 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" podStartSLOduration=32.284295148 podStartE2EDuration="40.655135989s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:51:19.026037084 +0000 UTC m=+1168.090367785" lastFinishedPulling="2026-02-16 13:51:27.396877925 +0000 UTC m=+1176.461208626" observedRunningTime="2026-02-16 13:51:28.641189917 +0000 UTC m=+1177.705520618" watchObservedRunningTime="2026-02-16 13:51:28.655135989 +0000 UTC m=+1177.719466700" Feb 16 13:51:28 crc kubenswrapper[4812]: I0216 13:51:28.677953 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5lnxb" podStartSLOduration=5.195106524 podStartE2EDuration="40.677930026s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:50:51.917530873 +0000 UTC m=+1140.981861574" lastFinishedPulling="2026-02-16 13:51:27.400354375 +0000 UTC m=+1176.464685076" observedRunningTime="2026-02-16 13:51:28.67146574 +0000 UTC m=+1177.735796451" watchObservedRunningTime="2026-02-16 13:51:28.677930026 +0000 UTC m=+1177.742260727" Feb 16 13:51:29 crc kubenswrapper[4812]: I0216 13:51:29.218545 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-ckx77" Feb 16 13:51:29 crc kubenswrapper[4812]: I0216 13:51:29.219186 4812 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xsnnh" Feb 16 13:51:29 crc kubenswrapper[4812]: I0216 13:51:29.220045 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-8zb8t" Feb 16 13:51:29 crc kubenswrapper[4812]: I0216 13:51:29.220219 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6vps7" Feb 16 13:51:29 crc kubenswrapper[4812]: I0216 13:51:29.222419 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rtgvj" Feb 16 13:51:29 crc kubenswrapper[4812]: I0216 13:51:29.248381 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ts4nk" Feb 16 13:51:29 crc kubenswrapper[4812]: I0216 13:51:29.289904 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ggbhg" Feb 16 13:51:29 crc kubenswrapper[4812]: I0216 13:51:29.322319 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6zzj" Feb 16 13:51:29 crc kubenswrapper[4812]: I0216 13:51:29.328820 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9j44f" Feb 16 13:51:29 crc kubenswrapper[4812]: I0216 13:51:29.330222 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-sh8cv" event={"ID":"eb78077d-7a72-4293-a0bf-8f7ce62aad8d","Type":"ContainerStarted","Data":"ca3dbd44b0c897c39d708a39f3f5768421f1c033a236aec35bbb5b155a8bbcbd"} Feb 16 
13:51:29 crc kubenswrapper[4812]: I0216 13:51:29.330761 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-sh8cv" Feb 16 13:51:29 crc kubenswrapper[4812]: I0216 13:51:29.332715 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7sb4m" event={"ID":"0c6d2754-f4e2-497a-aa47-aa568aa9805c","Type":"ContainerStarted","Data":"11c866489c8600351059f7eae56439827a739c8a06a9e4ddd44d6a82a074f1b7"} Feb 16 13:51:29 crc kubenswrapper[4812]: I0216 13:51:29.428817 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7sb4m" Feb 16 13:51:29 crc kubenswrapper[4812]: I0216 13:51:29.674232 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-c5l89" Feb 16 13:51:29 crc kubenswrapper[4812]: I0216 13:51:29.679131 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-sh8cv" podStartSLOduration=5.679480675 podStartE2EDuration="41.679117052s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 13:50:52.323556065 +0000 UTC m=+1141.387886766" lastFinishedPulling="2026-02-16 13:51:28.323192452 +0000 UTC m=+1177.387523143" observedRunningTime="2026-02-16 13:51:29.676753314 +0000 UTC m=+1178.741084015" watchObservedRunningTime="2026-02-16 13:51:29.679117052 +0000 UTC m=+1178.743447753" Feb 16 13:51:29 crc kubenswrapper[4812]: I0216 13:51:29.746996 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7sb4m" podStartSLOduration=5.346834148 podStartE2EDuration="41.746934047s" podCreationTimestamp="2026-02-16 13:50:48 +0000 UTC" firstStartedPulling="2026-02-16 
13:50:51.922416974 +0000 UTC m=+1140.986747675" lastFinishedPulling="2026-02-16 13:51:28.322516883 +0000 UTC m=+1177.386847574" observedRunningTime="2026-02-16 13:51:29.741275724 +0000 UTC m=+1178.805606445" watchObservedRunningTime="2026-02-16 13:51:29.746934047 +0000 UTC m=+1178.811264748" Feb 16 13:51:30 crc kubenswrapper[4812]: I0216 13:51:30.598823 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-866896b95f-8plmx" Feb 16 13:51:30 crc kubenswrapper[4812]: I0216 13:51:30.598868 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-lwc2x" Feb 16 13:51:30 crc kubenswrapper[4812]: I0216 13:51:30.754794 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-mpb9g" Feb 16 13:51:30 crc kubenswrapper[4812]: I0216 13:51:30.911421 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tzzsc" Feb 16 13:51:35 crc kubenswrapper[4812]: I0216 13:51:35.051308 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-z7qxl" Feb 16 13:51:35 crc kubenswrapper[4812]: I0216 13:51:35.433506 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh" Feb 16 13:51:39 crc kubenswrapper[4812]: I0216 13:51:39.101910 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5lnxb" Feb 16 13:51:39 crc kubenswrapper[4812]: I0216 13:51:39.259622 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-sh8cv" Feb 16 13:51:39 crc kubenswrapper[4812]: I0216 13:51:39.430703 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7sb4m" Feb 16 13:51:39 crc kubenswrapper[4812]: I0216 13:51:39.592107 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-2vz54" Feb 16 13:51:40 crc kubenswrapper[4812]: I0216 13:51:40.131375 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-84k8d" Feb 16 13:51:58 crc kubenswrapper[4812]: I0216 13:51:58.950751 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gk4pg"] Feb 16 13:51:58 crc kubenswrapper[4812]: I0216 13:51:58.959760 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gk4pg" Feb 16 13:51:58 crc kubenswrapper[4812]: I0216 13:51:58.962482 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 16 13:51:58 crc kubenswrapper[4812]: I0216 13:51:58.963619 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-mqwlj" Feb 16 13:51:58 crc kubenswrapper[4812]: I0216 13:51:58.963680 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 16 13:51:58 crc kubenswrapper[4812]: I0216 13:51:58.966415 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 16 13:51:58 crc kubenswrapper[4812]: I0216 13:51:58.969400 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gk4pg"] Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.062478 4812 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vr5nz"] Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.063353 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48-config\") pod \"dnsmasq-dns-675f4bcbfc-gk4pg\" (UID: \"d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gk4pg" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.063479 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq4bt\" (UniqueName: \"kubernetes.io/projected/d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48-kube-api-access-nq4bt\") pod \"dnsmasq-dns-675f4bcbfc-gk4pg\" (UID: \"d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gk4pg" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.064622 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vr5nz" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.067398 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.081117 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vr5nz"] Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.164769 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/376ebce7-276d-48c2-8b87-9d3389fd60f4-config\") pod \"dnsmasq-dns-78dd6ddcc-vr5nz\" (UID: \"376ebce7-276d-48c2-8b87-9d3389fd60f4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vr5nz" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.164835 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq4bt\" (UniqueName: 
\"kubernetes.io/projected/d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48-kube-api-access-nq4bt\") pod \"dnsmasq-dns-675f4bcbfc-gk4pg\" (UID: \"d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gk4pg" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.164893 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/376ebce7-276d-48c2-8b87-9d3389fd60f4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vr5nz\" (UID: \"376ebce7-276d-48c2-8b87-9d3389fd60f4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vr5nz" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.164928 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48-config\") pod \"dnsmasq-dns-675f4bcbfc-gk4pg\" (UID: \"d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gk4pg" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.164974 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h928m\" (UniqueName: \"kubernetes.io/projected/376ebce7-276d-48c2-8b87-9d3389fd60f4-kube-api-access-h928m\") pod \"dnsmasq-dns-78dd6ddcc-vr5nz\" (UID: \"376ebce7-276d-48c2-8b87-9d3389fd60f4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vr5nz" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.165867 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48-config\") pod \"dnsmasq-dns-675f4bcbfc-gk4pg\" (UID: \"d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gk4pg" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.196274 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq4bt\" (UniqueName: 
\"kubernetes.io/projected/d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48-kube-api-access-nq4bt\") pod \"dnsmasq-dns-675f4bcbfc-gk4pg\" (UID: \"d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gk4pg" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.266500 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/376ebce7-276d-48c2-8b87-9d3389fd60f4-config\") pod \"dnsmasq-dns-78dd6ddcc-vr5nz\" (UID: \"376ebce7-276d-48c2-8b87-9d3389fd60f4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vr5nz" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.266594 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/376ebce7-276d-48c2-8b87-9d3389fd60f4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vr5nz\" (UID: \"376ebce7-276d-48c2-8b87-9d3389fd60f4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vr5nz" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.266652 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h928m\" (UniqueName: \"kubernetes.io/projected/376ebce7-276d-48c2-8b87-9d3389fd60f4-kube-api-access-h928m\") pod \"dnsmasq-dns-78dd6ddcc-vr5nz\" (UID: \"376ebce7-276d-48c2-8b87-9d3389fd60f4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vr5nz" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.267695 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/376ebce7-276d-48c2-8b87-9d3389fd60f4-config\") pod \"dnsmasq-dns-78dd6ddcc-vr5nz\" (UID: \"376ebce7-276d-48c2-8b87-9d3389fd60f4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vr5nz" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.267699 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/376ebce7-276d-48c2-8b87-9d3389fd60f4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vr5nz\" 
(UID: \"376ebce7-276d-48c2-8b87-9d3389fd60f4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vr5nz" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.286772 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h928m\" (UniqueName: \"kubernetes.io/projected/376ebce7-276d-48c2-8b87-9d3389fd60f4-kube-api-access-h928m\") pod \"dnsmasq-dns-78dd6ddcc-vr5nz\" (UID: \"376ebce7-276d-48c2-8b87-9d3389fd60f4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vr5nz" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.289722 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gk4pg" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.382676 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vr5nz" Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.789766 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vr5nz"] Feb 16 13:51:59 crc kubenswrapper[4812]: I0216 13:51:59.892824 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gk4pg"] Feb 16 13:51:59 crc kubenswrapper[4812]: W0216 13:51:59.897092 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd03c6dc6_9a9e_4bce_9b7b_96ebddab6f48.slice/crio-736739f7f43c1bb3b700195b12ef7e6dc7aab7d8489306d79037323e7af93bb9 WatchSource:0}: Error finding container 736739f7f43c1bb3b700195b12ef7e6dc7aab7d8489306d79037323e7af93bb9: Status 404 returned error can't find the container with id 736739f7f43c1bb3b700195b12ef7e6dc7aab7d8489306d79037323e7af93bb9 Feb 16 13:52:00 crc kubenswrapper[4812]: I0216 13:52:00.780331 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vr5nz" 
event={"ID":"376ebce7-276d-48c2-8b87-9d3389fd60f4","Type":"ContainerStarted","Data":"770fa4bbb625eb4bf247d0526b38347292ebe2af151e039c8fcf12f3412d23f3"} Feb 16 13:52:00 crc kubenswrapper[4812]: I0216 13:52:00.782970 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-gk4pg" event={"ID":"d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48","Type":"ContainerStarted","Data":"736739f7f43c1bb3b700195b12ef7e6dc7aab7d8489306d79037323e7af93bb9"} Feb 16 13:52:01 crc kubenswrapper[4812]: I0216 13:52:01.967677 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gk4pg"] Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.011246 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5d2mw"] Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.012810 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.019116 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5d2mw"] Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.145951 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5d2mw\" (UID: \"5e9d026d-fcd8-49b3-8268-8a9e59f077d0\") " pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.146143 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prsk4\" (UniqueName: \"kubernetes.io/projected/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-kube-api-access-prsk4\") pod \"dnsmasq-dns-666b6646f7-5d2mw\" (UID: \"5e9d026d-fcd8-49b3-8268-8a9e59f077d0\") " pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" Feb 16 13:52:02 crc kubenswrapper[4812]: 
I0216 13:52:02.146601 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-config\") pod \"dnsmasq-dns-666b6646f7-5d2mw\" (UID: \"5e9d026d-fcd8-49b3-8268-8a9e59f077d0\") " pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.247702 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prsk4\" (UniqueName: \"kubernetes.io/projected/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-kube-api-access-prsk4\") pod \"dnsmasq-dns-666b6646f7-5d2mw\" (UID: \"5e9d026d-fcd8-49b3-8268-8a9e59f077d0\") " pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.247805 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-config\") pod \"dnsmasq-dns-666b6646f7-5d2mw\" (UID: \"5e9d026d-fcd8-49b3-8268-8a9e59f077d0\") " pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.247851 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5d2mw\" (UID: \"5e9d026d-fcd8-49b3-8268-8a9e59f077d0\") " pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.249214 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-config\") pod \"dnsmasq-dns-666b6646f7-5d2mw\" (UID: \"5e9d026d-fcd8-49b3-8268-8a9e59f077d0\") " pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.249517 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5d2mw\" (UID: \"5e9d026d-fcd8-49b3-8268-8a9e59f077d0\") " pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.284426 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prsk4\" (UniqueName: \"kubernetes.io/projected/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-kube-api-access-prsk4\") pod \"dnsmasq-dns-666b6646f7-5d2mw\" (UID: \"5e9d026d-fcd8-49b3-8268-8a9e59f077d0\") " pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.345617 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.402253 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vr5nz"] Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.428473 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gl6hc"] Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.429946 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.473759 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gl6hc"] Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.558990 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-config\") pod \"dnsmasq-dns-57d769cc4f-gl6hc\" (UID: \"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.559079 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gl6hc\" (UID: \"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.559111 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chgwz\" (UniqueName: \"kubernetes.io/projected/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-kube-api-access-chgwz\") pod \"dnsmasq-dns-57d769cc4f-gl6hc\" (UID: \"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.662349 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gl6hc\" (UID: \"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.662465 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chgwz\" (UniqueName: 
\"kubernetes.io/projected/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-kube-api-access-chgwz\") pod \"dnsmasq-dns-57d769cc4f-gl6hc\" (UID: \"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.662878 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-config\") pod \"dnsmasq-dns-57d769cc4f-gl6hc\" (UID: \"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.664692 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-config\") pod \"dnsmasq-dns-57d769cc4f-gl6hc\" (UID: \"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.665723 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gl6hc\" (UID: \"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.686498 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chgwz\" (UniqueName: \"kubernetes.io/projected/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-kube-api-access-chgwz\") pod \"dnsmasq-dns-57d769cc4f-gl6hc\" (UID: \"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3\") " pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" Feb 16 13:52:02 crc kubenswrapper[4812]: I0216 13:52:02.812096 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.113385 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5d2mw"] Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.226789 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.228224 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.230764 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.230960 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6wzbn" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.231041 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.231048 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.231081 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.231041 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.231354 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.272023 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.388995 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.389046 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.389089 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.389138 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.389187 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.389228 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-wnk78\" (UniqueName: \"kubernetes.io/projected/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-kube-api-access-wnk78\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.389249 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.389284 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-config-data\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.389350 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.389405 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-935dbb29-751d-4384-93d1-1925a57c3108\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-935dbb29-751d-4384-93d1-1925a57c3108\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.389432 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.557021 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.557098 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.557139 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnk78\" (UniqueName: \"kubernetes.io/projected/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-kube-api-access-wnk78\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.557165 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.557200 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.557225 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.557251 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-935dbb29-751d-4384-93d1-1925a57c3108\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-935dbb29-751d-4384-93d1-1925a57c3108\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.557276 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.557306 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.557327 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 
13:52:03.557361 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.558583 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.560266 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.565437 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.566774 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.572265 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-config-data\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.573087 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.573119 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.573894 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.578651 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.578714 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-935dbb29-751d-4384-93d1-1925a57c3108\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-935dbb29-751d-4384-93d1-1925a57c3108\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/79f88a8fe688dd14d5462c212ece58160043428df4e097a9e90d16a75af35b5f/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.581601 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.587177 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.588007 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnk78\" (UniqueName: \"kubernetes.io/projected/aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1-kube-api-access-wnk78\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.592666 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.595683 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.607114 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.607364 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.607534 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.607680 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-z7smn" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.607886 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.608048 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.608192 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.638384 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gl6hc"] Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.658059 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-935dbb29-751d-4384-93d1-1925a57c3108\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-935dbb29-751d-4384-93d1-1925a57c3108\") pod \"rabbitmq-server-0\" (UID: \"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1\") " 
pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.766159 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fd3d7247-6e34-48dc-b8ee-8f7a61c982ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fd3d7247-6e34-48dc-b8ee-8f7a61c982ed\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.766634 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f00dce1e-5743-4129-b78b-4a29351da7ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.766763 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f00dce1e-5743-4129-b78b-4a29351da7ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.766852 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f00dce1e-5743-4129-b78b-4a29351da7ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.766944 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f00dce1e-5743-4129-b78b-4a29351da7ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.767046 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f00dce1e-5743-4129-b78b-4a29351da7ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.767125 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f00dce1e-5743-4129-b78b-4a29351da7ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.767237 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d46b7\" (UniqueName: \"kubernetes.io/projected/f00dce1e-5743-4129-b78b-4a29351da7ed-kube-api-access-d46b7\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.767430 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f00dce1e-5743-4129-b78b-4a29351da7ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.767758 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f00dce1e-5743-4129-b78b-4a29351da7ed-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.767850 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f00dce1e-5743-4129-b78b-4a29351da7ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.869285 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f00dce1e-5743-4129-b78b-4a29351da7ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.869352 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f00dce1e-5743-4129-b78b-4a29351da7ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.869369 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f00dce1e-5743-4129-b78b-4a29351da7ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.869398 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f00dce1e-5743-4129-b78b-4a29351da7ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.869428 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f00dce1e-5743-4129-b78b-4a29351da7ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.869474 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d46b7\" (UniqueName: \"kubernetes.io/projected/f00dce1e-5743-4129-b78b-4a29351da7ed-kube-api-access-d46b7\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.869528 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f00dce1e-5743-4129-b78b-4a29351da7ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.869570 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f00dce1e-5743-4129-b78b-4a29351da7ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.869619 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f00dce1e-5743-4129-b78b-4a29351da7ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 
13:52:03.869648 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fd3d7247-6e34-48dc-b8ee-8f7a61c982ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fd3d7247-6e34-48dc-b8ee-8f7a61c982ed\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.869674 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f00dce1e-5743-4129-b78b-4a29351da7ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.869891 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f00dce1e-5743-4129-b78b-4a29351da7ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.871071 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f00dce1e-5743-4129-b78b-4a29351da7ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.891056 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f00dce1e-5743-4129-b78b-4a29351da7ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.891405 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f00dce1e-5743-4129-b78b-4a29351da7ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.892625 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f00dce1e-5743-4129-b78b-4a29351da7ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.892796 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.897068 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.897111 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fd3d7247-6e34-48dc-b8ee-8f7a61c982ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fd3d7247-6e34-48dc-b8ee-8f7a61c982ed\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4bd88560ff3e79ddca1a4d559dce48ca454c42050f23cf184492d01882e831b6/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.916875 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f00dce1e-5743-4129-b78b-4a29351da7ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.917693 
4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f00dce1e-5743-4129-b78b-4a29351da7ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.919417 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d46b7\" (UniqueName: \"kubernetes.io/projected/f00dce1e-5743-4129-b78b-4a29351da7ed-kube-api-access-d46b7\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.922959 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f00dce1e-5743-4129-b78b-4a29351da7ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.926545 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f00dce1e-5743-4129-b78b-4a29351da7ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.942380 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" event={"ID":"5e9d026d-fcd8-49b3-8268-8a9e59f077d0","Type":"ContainerStarted","Data":"109dc29adaf3c71cc8d97eb5e633ee083d819e31a8a8fd6f8c3fc15ea7a97657"} Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.942433 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" 
event={"ID":"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3","Type":"ContainerStarted","Data":"462f8eb9a9230667b134c4446c906ab371db8b60cecce0b15e862a5b9b9556d0"} Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.950912 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fd3d7247-6e34-48dc-b8ee-8f7a61c982ed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fd3d7247-6e34-48dc-b8ee-8f7a61c982ed\") pod \"rabbitmq-cell1-server-0\" (UID: \"f00dce1e-5743-4129-b78b-4a29351da7ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:03 crc kubenswrapper[4812]: I0216 13:52:03.987036 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.442377 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.446807 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.449941 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-v8rb6" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.450225 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.450800 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.451317 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.462710 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.476861 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.600415 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11179909-1e24-429d-9d33-e2c448e1cf6b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.600520 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/11179909-1e24-429d-9d33-e2c448e1cf6b-kolla-config\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.600549 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6rdjw\" (UniqueName: \"kubernetes.io/projected/11179909-1e24-429d-9d33-e2c448e1cf6b-kube-api-access-6rdjw\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.600566 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/11179909-1e24-429d-9d33-e2c448e1cf6b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.600583 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11179909-1e24-429d-9d33-e2c448e1cf6b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.600775 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/11179909-1e24-429d-9d33-e2c448e1cf6b-config-data-default\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.600850 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c99ac855-9659-49a2-8f19-a7aea7e561e9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c99ac855-9659-49a2-8f19-a7aea7e561e9\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.600930 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/11179909-1e24-429d-9d33-e2c448e1cf6b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.650211 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 13:52:04 crc kubenswrapper[4812]: W0216 13:52:04.700807 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa9e7fbb_f7a7_4a2f_91cc_77a4d1cd24f1.slice/crio-5ae82317ce7903421abb95cab90f4528c054ebae18d6edf422b2423d7eb55e3e WatchSource:0}: Error finding container 5ae82317ce7903421abb95cab90f4528c054ebae18d6edf422b2423d7eb55e3e: Status 404 returned error can't find the container with id 5ae82317ce7903421abb95cab90f4528c054ebae18d6edf422b2423d7eb55e3e Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.706843 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/11179909-1e24-429d-9d33-e2c448e1cf6b-kolla-config\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.706908 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rdjw\" (UniqueName: \"kubernetes.io/projected/11179909-1e24-429d-9d33-e2c448e1cf6b-kube-api-access-6rdjw\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.706936 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11179909-1e24-429d-9d33-e2c448e1cf6b-operator-scripts\") pod \"openstack-galera-0\" (UID: 
\"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.706955 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/11179909-1e24-429d-9d33-e2c448e1cf6b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.706995 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/11179909-1e24-429d-9d33-e2c448e1cf6b-config-data-default\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.707025 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c99ac855-9659-49a2-8f19-a7aea7e561e9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c99ac855-9659-49a2-8f19-a7aea7e561e9\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.707074 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/11179909-1e24-429d-9d33-e2c448e1cf6b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.707118 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11179909-1e24-429d-9d33-e2c448e1cf6b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 
16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.709363 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/11179909-1e24-429d-9d33-e2c448e1cf6b-config-data-default\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.711709 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/11179909-1e24-429d-9d33-e2c448e1cf6b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.717076 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11179909-1e24-429d-9d33-e2c448e1cf6b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.718454 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/11179909-1e24-429d-9d33-e2c448e1cf6b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0" Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.718571 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.718834 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.718869 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c99ac855-9659-49a2-8f19-a7aea7e561e9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c99ac855-9659-49a2-8f19-a7aea7e561e9\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/04d72b505dda920e1d65d88a6ef4aaf092e2dd51747c004c0c440261fc660238/globalmount\"" pod="openstack/openstack-galera-0"
Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.723044 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11179909-1e24-429d-9d33-e2c448e1cf6b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0"
Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.731896 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/11179909-1e24-429d-9d33-e2c448e1cf6b-kolla-config\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0"
Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.746001 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rdjw\" (UniqueName: \"kubernetes.io/projected/11179909-1e24-429d-9d33-e2c448e1cf6b-kube-api-access-6rdjw\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0"
Feb 16 13:52:04 crc kubenswrapper[4812]: W0216 13:52:04.768268 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf00dce1e_5743_4129_b78b_4a29351da7ed.slice/crio-6535b7da2cc421c7f0bc24770d79052d732475a4b7176549055cbfe9c6963d15 WatchSource:0}: Error finding container 6535b7da2cc421c7f0bc24770d79052d732475a4b7176549055cbfe9c6963d15: Status 404 returned error can't find the container with id 6535b7da2cc421c7f0bc24770d79052d732475a4b7176549055cbfe9c6963d15
Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.773968 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c99ac855-9659-49a2-8f19-a7aea7e561e9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c99ac855-9659-49a2-8f19-a7aea7e561e9\") pod \"openstack-galera-0\" (UID: \"11179909-1e24-429d-9d33-e2c448e1cf6b\") " pod="openstack/openstack-galera-0"
Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.801516 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.978630 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f00dce1e-5743-4129-b78b-4a29351da7ed","Type":"ContainerStarted","Data":"6535b7da2cc421c7f0bc24770d79052d732475a4b7176549055cbfe9c6963d15"}
Feb 16 13:52:04 crc kubenswrapper[4812]: I0216 13:52:04.994690 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1","Type":"ContainerStarted","Data":"5ae82317ce7903421abb95cab90f4528c054ebae18d6edf422b2423d7eb55e3e"}
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.535149 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 13:52:05 crc kubenswrapper[4812]: W0216 13:52:05.579476 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11179909_1e24_429d_9d33_e2c448e1cf6b.slice/crio-022fd1445f047ac8e7325b1aab04c4dc23b30f7c480bb59bfce891afec4b82ee WatchSource:0}: Error finding container 022fd1445f047ac8e7325b1aab04c4dc23b30f7c480bb59bfce891afec4b82ee: Status 404 returned error can't find the container with id 022fd1445f047ac8e7325b1aab04c4dc23b30f7c480bb59bfce891afec4b82ee
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.748047 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.749791 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.761896 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.789767 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-9lzpz"
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.789934 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.790085 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.790204 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.927529 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.935598 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.938371 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-9tstp"
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.939622 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.944353 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.944861 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.944920 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.944968 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.945020 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.945068 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5dd3a1c7-18da-43c7-ae12-6851f1a410d7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5dd3a1c7-18da-43c7-ae12-6851f1a410d7\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.945094 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.945123 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.945136 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 16 13:52:05 crc kubenswrapper[4812]: I0216 13:52:05.945161 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pj74\" (UniqueName: \"kubernetes.io/projected/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-kube-api-access-6pj74\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.036111 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"11179909-1e24-429d-9d33-e2c448e1cf6b","Type":"ContainerStarted","Data":"022fd1445f047ac8e7325b1aab04c4dc23b30f7c480bb59bfce891afec4b82ee"}
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.046142 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.046218 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.046252 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.046273 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bshnh\" (UniqueName: \"kubernetes.io/projected/95382144-b401-41b0-bf26-8a5503df91f6-kube-api-access-bshnh\") pod \"memcached-0\" (UID: \"95382144-b401-41b0-bf26-8a5503df91f6\") " pod="openstack/memcached-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.046305 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95382144-b401-41b0-bf26-8a5503df91f6-config-data\") pod \"memcached-0\" (UID: \"95382144-b401-41b0-bf26-8a5503df91f6\") " pod="openstack/memcached-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.046367 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.046387 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5dd3a1c7-18da-43c7-ae12-6851f1a410d7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5dd3a1c7-18da-43c7-ae12-6851f1a410d7\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.046404 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.046433 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95382144-b401-41b0-bf26-8a5503df91f6-kolla-config\") pod \"memcached-0\" (UID: \"95382144-b401-41b0-bf26-8a5503df91f6\") " pod="openstack/memcached-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.046473 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pj74\" (UniqueName: \"kubernetes.io/projected/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-kube-api-access-6pj74\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.046496 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/95382144-b401-41b0-bf26-8a5503df91f6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"95382144-b401-41b0-bf26-8a5503df91f6\") " pod="openstack/memcached-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.046546 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95382144-b401-41b0-bf26-8a5503df91f6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"95382144-b401-41b0-bf26-8a5503df91f6\") " pod="openstack/memcached-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.046585 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.047735 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.048596 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.049243 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.059605 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.064181 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.064239 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5dd3a1c7-18da-43c7-ae12-6851f1a410d7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5dd3a1c7-18da-43c7-ae12-6851f1a410d7\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/292bf2caff2d2005bbc2957f3cf4b479f80e1cb9e1280143c8c9079e3e2e4bc6/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.105463 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pj74\" (UniqueName: \"kubernetes.io/projected/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-kube-api-access-6pj74\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.107419 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.107836 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.134276 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5dd3a1c7-18da-43c7-ae12-6851f1a410d7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5dd3a1c7-18da-43c7-ae12-6851f1a410d7\") pod \"openstack-cell1-galera-0\" (UID: \"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.147521 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bshnh\" (UniqueName: \"kubernetes.io/projected/95382144-b401-41b0-bf26-8a5503df91f6-kube-api-access-bshnh\") pod \"memcached-0\" (UID: \"95382144-b401-41b0-bf26-8a5503df91f6\") " pod="openstack/memcached-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.147570 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95382144-b401-41b0-bf26-8a5503df91f6-config-data\") pod \"memcached-0\" (UID: \"95382144-b401-41b0-bf26-8a5503df91f6\") " pod="openstack/memcached-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.147630 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95382144-b401-41b0-bf26-8a5503df91f6-kolla-config\") pod \"memcached-0\" (UID: \"95382144-b401-41b0-bf26-8a5503df91f6\") " pod="openstack/memcached-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.147653 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/95382144-b401-41b0-bf26-8a5503df91f6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"95382144-b401-41b0-bf26-8a5503df91f6\") " pod="openstack/memcached-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.147690 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95382144-b401-41b0-bf26-8a5503df91f6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"95382144-b401-41b0-bf26-8a5503df91f6\") " pod="openstack/memcached-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.149299 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95382144-b401-41b0-bf26-8a5503df91f6-config-data\") pod \"memcached-0\" (UID: \"95382144-b401-41b0-bf26-8a5503df91f6\") " pod="openstack/memcached-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.149532 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.149543 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95382144-b401-41b0-bf26-8a5503df91f6-kolla-config\") pod \"memcached-0\" (UID: \"95382144-b401-41b0-bf26-8a5503df91f6\") " pod="openstack/memcached-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.155091 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/95382144-b401-41b0-bf26-8a5503df91f6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"95382144-b401-41b0-bf26-8a5503df91f6\") " pod="openstack/memcached-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.160322 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95382144-b401-41b0-bf26-8a5503df91f6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"95382144-b401-41b0-bf26-8a5503df91f6\") " pod="openstack/memcached-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.168896 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bshnh\" (UniqueName: \"kubernetes.io/projected/95382144-b401-41b0-bf26-8a5503df91f6-kube-api-access-bshnh\") pod \"memcached-0\" (UID: \"95382144-b401-41b0-bf26-8a5503df91f6\") " pod="openstack/memcached-0"
Feb 16 13:52:06 crc kubenswrapper[4812]: I0216 13:52:06.276127 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 16 13:52:07 crc kubenswrapper[4812]: I0216 13:52:07.047325 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 13:52:07 crc kubenswrapper[4812]: I0216 13:52:07.392097 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 16 13:52:08 crc kubenswrapper[4812]: I0216 13:52:08.224022 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 13:52:08 crc kubenswrapper[4812]: I0216 13:52:08.227393 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 13:52:08 crc kubenswrapper[4812]: I0216 13:52:08.236736 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 13:52:08 crc kubenswrapper[4812]: I0216 13:52:08.248382 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-pr2x7"
Feb 16 13:52:08 crc kubenswrapper[4812]: I0216 13:52:08.377634 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v9nt\" (UniqueName: \"kubernetes.io/projected/6039f662-e9ac-455c-b4da-9bcbe34e1396-kube-api-access-9v9nt\") pod \"kube-state-metrics-0\" (UID: \"6039f662-e9ac-455c-b4da-9bcbe34e1396\") " pod="openstack/kube-state-metrics-0"
Feb 16 13:52:08 crc kubenswrapper[4812]: I0216 13:52:08.483224 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v9nt\" (UniqueName: \"kubernetes.io/projected/6039f662-e9ac-455c-b4da-9bcbe34e1396-kube-api-access-9v9nt\") pod \"kube-state-metrics-0\" (UID: \"6039f662-e9ac-455c-b4da-9bcbe34e1396\") " pod="openstack/kube-state-metrics-0"
Feb 16 13:52:08 crc kubenswrapper[4812]: I0216 13:52:08.540800 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v9nt\" (UniqueName: \"kubernetes.io/projected/6039f662-e9ac-455c-b4da-9bcbe34e1396-kube-api-access-9v9nt\") pod \"kube-state-metrics-0\" (UID: \"6039f662-e9ac-455c-b4da-9bcbe34e1396\") " pod="openstack/kube-state-metrics-0"
Feb 16 13:52:08 crc kubenswrapper[4812]: I0216 13:52:08.570969 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.268514 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.285985 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.302346 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96cb02af-deed-4da5-96cf-28d69592caed-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.302428 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/96cb02af-deed-4da5-96cf-28d69592caed-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.302504 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96cb02af-deed-4da5-96cf-28d69592caed-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.302540 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/96cb02af-deed-4da5-96cf-28d69592caed-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.302591 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96cb02af-deed-4da5-96cf-28d69592caed-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.302616 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/96cb02af-deed-4da5-96cf-28d69592caed-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.302658 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvmrs\" (UniqueName: \"kubernetes.io/projected/96cb02af-deed-4da5-96cf-28d69592caed-kube-api-access-mvmrs\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.302997 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.303192 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.306326 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.306741 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.306792 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-79hmr"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.490565 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96cb02af-deed-4da5-96cf-28d69592caed-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.490637 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/96cb02af-deed-4da5-96cf-28d69592caed-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.490737 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvmrs\" (UniqueName: \"kubernetes.io/projected/96cb02af-deed-4da5-96cf-28d69592caed-kube-api-access-mvmrs\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.491497 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96cb02af-deed-4da5-96cf-28d69592caed-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.491579 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/96cb02af-deed-4da5-96cf-28d69592caed-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.491649 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96cb02af-deed-4da5-96cf-28d69592caed-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.491680 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/96cb02af-deed-4da5-96cf-28d69592caed-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.519898 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/96cb02af-deed-4da5-96cf-28d69592caed-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.533659 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96cb02af-deed-4da5-96cf-28d69592caed-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.535636 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96cb02af-deed-4da5-96cf-28d69592caed-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.544032 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96cb02af-deed-4da5-96cf-28d69592caed-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.544099 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/96cb02af-deed-4da5-96cf-28d69592caed-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.544436 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.554623 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvmrs\" (UniqueName: \"kubernetes.io/projected/96cb02af-deed-4da5-96cf-28d69592caed-kube-api-access-mvmrs\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.555181 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/96cb02af-deed-4da5-96cf-28d69592caed-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"96cb02af-deed-4da5-96cf-28d69592caed\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.558136 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.599889 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.600005 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.603691 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.604124 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.604510 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.604601 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-m7q56"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.604683 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.604900 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216
13:52:09.607655 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.612661 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.703603 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.705235 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.705577 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-config\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.705822 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e02a9868-e12c-4a65-9ba5-4a5965131b5b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.705984 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.706113 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-149889e2-65b8-4663-a4ac-a48e48736700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-149889e2-65b8-4663-a4ac-a48e48736700\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.706149 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e02a9868-e12c-4a65-9ba5-4a5965131b5b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.706172 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkzm4\" (UniqueName: \"kubernetes.io/projected/e02a9868-e12c-4a65-9ba5-4a5965131b5b-kube-api-access-nkzm4\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.707568 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 
13:52:09.707662 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.707829 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.809207 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e02a9868-e12c-4a65-9ba5-4a5965131b5b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.809269 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.809364 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-149889e2-65b8-4663-a4ac-a48e48736700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-149889e2-65b8-4663-a4ac-a48e48736700\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " 
pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.809389 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e02a9868-e12c-4a65-9ba5-4a5965131b5b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.809409 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkzm4\" (UniqueName: \"kubernetes.io/projected/e02a9868-e12c-4a65-9ba5-4a5965131b5b-kube-api-access-nkzm4\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.809433 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.809483 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.809546 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-web-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.809581 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.809620 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-config\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.819283 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.835279 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.840634 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.846085 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.847852 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.855831 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e02a9868-e12c-4a65-9ba5-4a5965131b5b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.890106 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.890163 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-149889e2-65b8-4663-a4ac-a48e48736700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-149889e2-65b8-4663-a4ac-a48e48736700\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2b50b93c959874b648bd27e3349ab287881b6c268869bdab57c6de3e2a9a9419/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.890817 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e02a9868-e12c-4a65-9ba5-4a5965131b5b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.895119 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-config\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.918079 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkzm4\" (UniqueName: \"kubernetes.io/projected/e02a9868-e12c-4a65-9ba5-4a5965131b5b-kube-api-access-nkzm4\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:09 crc kubenswrapper[4812]: I0216 13:52:09.941426 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-149889e2-65b8-4663-a4ac-a48e48736700\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-149889e2-65b8-4663-a4ac-a48e48736700\") pod \"prometheus-metric-storage-0\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:10 crc kubenswrapper[4812]: I0216 13:52:10.225906 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.013780 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7dzhm"] Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.015289 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.017586 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.017622 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.021788 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-9ks2k" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.023857 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7dzhm"] Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.064509 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-hjxr5"] Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.067714 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.121960 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hjxr5"] Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.182960 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-combined-ca-bundle\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.183071 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-var-lib\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.183136 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-scripts\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.183187 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-var-run\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.183241 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-var-log-ovn\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.183275 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zflb8\" (UniqueName: \"kubernetes.io/projected/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-kube-api-access-zflb8\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.183392 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-var-run-ovn\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.183432 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-var-run\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.183496 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-ovn-controller-tls-certs\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.183570 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-etc-ovs\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.183599 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ksl8\" (UniqueName: \"kubernetes.io/projected/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-kube-api-access-4ksl8\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.184090 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-scripts\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.184152 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-var-log\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.291591 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-var-run-ovn\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.291679 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-var-run\") pod 
\"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.292594 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-var-run-ovn\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.292914 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-var-run\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.294629 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-ovn-controller-tls-certs\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.296336 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-etc-ovs\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.296386 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ksl8\" (UniqueName: \"kubernetes.io/projected/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-kube-api-access-4ksl8\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 
13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.296460 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-scripts\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.296494 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-var-log\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.296523 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-combined-ca-bundle\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.296580 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-var-lib\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.296622 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-scripts\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.296651 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-var-run\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.296681 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-var-log-ovn\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.296706 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zflb8\" (UniqueName: \"kubernetes.io/projected/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-kube-api-access-zflb8\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.297349 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-var-lib\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.297615 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-etc-ovs\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.297607 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-var-run\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " 
pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.297755 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-var-log\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.299290 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-var-log-ovn\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.300517 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-scripts\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.302381 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-scripts\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.312930 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-ovn-controller-tls-certs\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.315919 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-combined-ca-bundle\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.316140 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ksl8\" (UniqueName: \"kubernetes.io/projected/619a5cb7-30a8-4ac4-955e-d2c97ce49fda-kube-api-access-4ksl8\") pod \"ovn-controller-ovs-hjxr5\" (UID: \"619a5cb7-30a8-4ac4-955e-d2c97ce49fda\") " pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.317645 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zflb8\" (UniqueName: \"kubernetes.io/projected/2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70-kube-api-access-zflb8\") pod \"ovn-controller-7dzhm\" (UID: \"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70\") " pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.364185 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.400426 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.821044 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.822613 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.830901 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-bgjpd" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.831319 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.831577 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.831704 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.832535 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.841480 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.952936 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7eae7df6-e3b7-4ac5-bb18-6b781744747d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.953021 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eae7df6-e3b7-4ac5-bb18-6b781744747d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.953054 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-6chhx\" (UniqueName: \"kubernetes.io/projected/7eae7df6-e3b7-4ac5-bb18-6b781744747d-kube-api-access-6chhx\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.953081 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eae7df6-e3b7-4ac5-bb18-6b781744747d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.953106 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eae7df6-e3b7-4ac5-bb18-6b781744747d-config\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.953236 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7eae7df6-e3b7-4ac5-bb18-6b781744747d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.953410 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eae7df6-e3b7-4ac5-bb18-6b781744747d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:12 crc kubenswrapper[4812]: I0216 13:52:12.953535 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-040880e4-8aa1-463e-89db-3dd9edd7b98c\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-040880e4-8aa1-463e-89db-3dd9edd7b98c\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.055589 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eae7df6-e3b7-4ac5-bb18-6b781744747d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.055668 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-040880e4-8aa1-463e-89db-3dd9edd7b98c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-040880e4-8aa1-463e-89db-3dd9edd7b98c\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.055715 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7eae7df6-e3b7-4ac5-bb18-6b781744747d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.055761 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eae7df6-e3b7-4ac5-bb18-6b781744747d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.055780 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6chhx\" (UniqueName: 
\"kubernetes.io/projected/7eae7df6-e3b7-4ac5-bb18-6b781744747d-kube-api-access-6chhx\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.055798 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eae7df6-e3b7-4ac5-bb18-6b781744747d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.055817 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eae7df6-e3b7-4ac5-bb18-6b781744747d-config\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.055852 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7eae7df6-e3b7-4ac5-bb18-6b781744747d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.057690 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eae7df6-e3b7-4ac5-bb18-6b781744747d-config\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.057897 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7eae7df6-e3b7-4ac5-bb18-6b781744747d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc 
kubenswrapper[4812]: I0216 13:52:13.059086 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.059108 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-040880e4-8aa1-463e-89db-3dd9edd7b98c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-040880e4-8aa1-463e-89db-3dd9edd7b98c\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/86dd96268d956529926d32852c9c160346ea8d0cda620833b04e1d8d66501b2c/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.061111 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eae7df6-e3b7-4ac5-bb18-6b781744747d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.061744 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eae7df6-e3b7-4ac5-bb18-6b781744747d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.063065 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7eae7df6-e3b7-4ac5-bb18-6b781744747d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.071439 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eae7df6-e3b7-4ac5-bb18-6b781744747d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.090239 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6chhx\" (UniqueName: \"kubernetes.io/projected/7eae7df6-e3b7-4ac5-bb18-6b781744747d-kube-api-access-6chhx\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.171004 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-040880e4-8aa1-463e-89db-3dd9edd7b98c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-040880e4-8aa1-463e-89db-3dd9edd7b98c\") pod \"ovsdbserver-sb-0\" (UID: \"7eae7df6-e3b7-4ac5-bb18-6b781744747d\") " pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:13 crc kubenswrapper[4812]: I0216 13:52:13.181212 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.563576 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.565620 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.571186 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.571597 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-mb2xk" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.571743 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.571802 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.599268 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.683195 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-23d66caa-b944-48c7-a950-6926d0cb31b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23d66caa-b944-48c7-a950-6926d0cb31b1\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.683263 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.683288 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d4q9\" (UniqueName: 
\"kubernetes.io/projected/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-kube-api-access-4d4q9\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.683402 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.683426 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-config\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.683477 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.683507 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.683534 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.784532 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-23d66caa-b944-48c7-a950-6926d0cb31b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23d66caa-b944-48c7-a950-6926d0cb31b1\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.784635 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.784663 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d4q9\" (UniqueName: \"kubernetes.io/projected/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-kube-api-access-4d4q9\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.784825 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.784845 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-config\") pod \"ovsdbserver-nb-0\" (UID: 
\"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.784890 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.784914 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.784950 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.785362 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.785992 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-config\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.786363 4812 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.787969 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.787995 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-23d66caa-b944-48c7-a950-6926d0cb31b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23d66caa-b944-48c7-a950-6926d0cb31b1\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8250a6eebe4f07d07c091dd5835ce94d51ed9463c4d463725b72dcf07d1eca60/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.799426 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.799616 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.804745 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d4q9\" (UniqueName: \"kubernetes.io/projected/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-kube-api-access-4d4q9\") pod 
\"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.804792 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:16 crc kubenswrapper[4812]: I0216 13:52:16.835567 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-23d66caa-b944-48c7-a950-6926d0cb31b1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23d66caa-b944-48c7-a950-6926d0cb31b1\") pod \"ovsdbserver-nb-0\" (UID: \"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516\") " pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:17 crc kubenswrapper[4812]: I0216 13:52:16.923518 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.531965 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f"] Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.533775 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.538307 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-jn2lh" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.538563 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.538720 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.538874 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.539123 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.567362 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f"] Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.631756 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6xb2f\" (UID: \"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.631903 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f-cloudkitty-lokistack-ca-bundle\") pod 
\"cloudkitty-lokistack-distributor-585d9bcbc-6xb2f\" (UID: \"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.631970 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6xb2f\" (UID: \"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.632034 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sr69\" (UniqueName: \"kubernetes.io/projected/a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f-kube-api-access-4sr69\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6xb2f\" (UID: \"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.632102 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6xb2f\" (UID: \"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.710728 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"] Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.714698 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.720489 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.720820 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.721078 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.732315 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"] Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.733520 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6xb2f\" (UID: \"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.733606 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6xb2f\" (UID: \"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.733662 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sr69\" (UniqueName: \"kubernetes.io/projected/a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f-kube-api-access-4sr69\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6xb2f\" (UID: 
\"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.733712 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6xb2f\" (UID: \"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.733820 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6xb2f\" (UID: \"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.735515 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6xb2f\" (UID: \"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.736110 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6xb2f\" (UID: \"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.746350 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6xb2f\" (UID: \"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.767763 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6xb2f\" (UID: \"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.773371 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sr69\" (UniqueName: \"kubernetes.io/projected/a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f-kube-api-access-4sr69\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-6xb2f\" (UID: \"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.837783 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/826ded0a-246d-40b7-87d1-22fa8224d506-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww" Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.837876 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/826ded0a-246d-40b7-87d1-22fa8224d506-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.837938 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/826ded0a-246d-40b7-87d1-22fa8224d506-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.837972 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/826ded0a-246d-40b7-87d1-22fa8224d506-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.838043 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cnr8\" (UniqueName: \"kubernetes.io/projected/826ded0a-246d-40b7-87d1-22fa8224d506-kube-api-access-2cnr8\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.838126 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/826ded0a-246d-40b7-87d1-22fa8224d506-cloudkitty-loki-s3\") pod
\"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.841613 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"]
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.847421 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.856760 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"]
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.862578 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.862829 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.863125 4812 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.940601 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cnr8\" (UniqueName: \"kubernetes.io/projected/826ded0a-246d-40b7-87d1-22fa8224d506-kube-api-access-2cnr8\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.940675 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/826ded0a-246d-40b7-87d1-22fa8224d506-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.940715 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/d909c793-0634-48f0-8f71-4f21dc9979af-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h\" (UID: \"d909c793-0634-48f0-8f71-4f21dc9979af\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.940766 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d909c793-0634-48f0-8f71-4f21dc9979af-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h\" (UID: \"d909c793-0634-48f0-8f71-4f21dc9979af\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216
13:52:19.940813 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb8sg\" (UniqueName: \"kubernetes.io/projected/d909c793-0634-48f0-8f71-4f21dc9979af-kube-api-access-xb8sg\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h\" (UID: \"d909c793-0634-48f0-8f71-4f21dc9979af\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.940860 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/d909c793-0634-48f0-8f71-4f21dc9979af-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h\" (UID: \"d909c793-0634-48f0-8f71-4f21dc9979af\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.940911 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d909c793-0634-48f0-8f71-4f21dc9979af-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h\" (UID: \"d909c793-0634-48f0-8f71-4f21dc9979af\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.940959 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/826ded0a-246d-40b7-87d1-22fa8224d506-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.941039 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName:
\"kubernetes.io/configmap/826ded0a-246d-40b7-87d1-22fa8224d506-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.941077 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/826ded0a-246d-40b7-87d1-22fa8224d506-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.941111 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/826ded0a-246d-40b7-87d1-22fa8224d506-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.943123 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/826ded0a-246d-40b7-87d1-22fa8224d506-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.947563 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/826ded0a-246d-40b7-87d1-22fa8224d506-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") "
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.947846 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/826ded0a-246d-40b7-87d1-22fa8224d506-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.952167 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/826ded0a-246d-40b7-87d1-22fa8224d506-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.974508 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/826ded0a-246d-40b7-87d1-22fa8224d506-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:19 crc kubenswrapper[4812]: I0216 13:52:19.978321 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cnr8\" (UniqueName: \"kubernetes.io/projected/826ded0a-246d-40b7-87d1-22fa8224d506-kube-api-access-2cnr8\") pod \"cloudkitty-lokistack-querier-58c84b5844-p88ww\" (UID: \"826ded0a-246d-40b7-87d1-22fa8224d506\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.029229 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"]
Feb 16
13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.030995 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.044702 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.044980 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.045123 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.045348 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-6478v"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.045578 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.045829 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.045851 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/d909c793-0634-48f0-8f71-4f21dc9979af-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h\" (UID: \"d909c793-0634-48f0-8f71-4f21dc9979af\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.045983 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d909c793-0634-48f0-8f71-4f21dc9979af-config\") pod
\"cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h\" (UID: \"d909c793-0634-48f0-8f71-4f21dc9979af\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.046003 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.046164 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/d909c793-0634-48f0-8f71-4f21dc9979af-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h\" (UID: \"d909c793-0634-48f0-8f71-4f21dc9979af\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.046206 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d909c793-0634-48f0-8f71-4f21dc9979af-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h\" (UID: \"d909c793-0634-48f0-8f71-4f21dc9979af\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.046268 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb8sg\" (UniqueName: \"kubernetes.io/projected/d909c793-0634-48f0-8f71-4f21dc9979af-kube-api-access-xb8sg\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h\" (UID: \"d909c793-0634-48f0-8f71-4f21dc9979af\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.047965 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName:
\"kubernetes.io/configmap/d909c793-0634-48f0-8f71-4f21dc9979af-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h\" (UID: \"d909c793-0634-48f0-8f71-4f21dc9979af\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.048380 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d909c793-0634-48f0-8f71-4f21dc9979af-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h\" (UID: \"d909c793-0634-48f0-8f71-4f21dc9979af\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.050717 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/d909c793-0634-48f0-8f71-4f21dc9979af-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h\" (UID: \"d909c793-0634-48f0-8f71-4f21dc9979af\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.060526 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/d909c793-0634-48f0-8f71-4f21dc9979af-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h\" (UID: \"d909c793-0634-48f0-8f71-4f21dc9979af\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.069460 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"]
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.071054 4812 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.080911 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb8sg\" (UniqueName: \"kubernetes.io/projected/d909c793-0634-48f0-8f71-4f21dc9979af-kube-api-access-xb8sg\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h\" (UID: \"d909c793-0634-48f0-8f71-4f21dc9979af\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.092654 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"]
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.106367 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"]
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.138640 4812 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148243 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148295 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148319 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148369 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148402 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148479 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148496 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148518 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148541 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName:
\"kubernetes.io/configmap/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148564 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdxtv\" (UniqueName: \"kubernetes.io/projected/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-kube-api-access-gdxtv\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148582 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148612 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148634 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID:
\"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148658 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148677 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148701 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmwqc\" (UniqueName: \"kubernetes.io/projected/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-kube-api-access-lmwqc\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148730 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.148770 4812
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.209983 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250137 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250224 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250277 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250311 4812
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmwqc\" (UniqueName: \"kubernetes.io/projected/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-kube-api-access-lmwqc\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250347 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250391 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250433 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250539 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") "
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250577 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250647 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250679 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250733 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250764 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250804 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250843 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250866 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdxtv\" (UniqueName: \"kubernetes.io/projected/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-kube-api-access-gdxtv\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"
Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250899 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID:
\"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.250938 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.252375 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.253129 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.253843 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.254669 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.255951 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.257615 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.258086 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.260005 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.262162 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.262298 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.263160 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.263410 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.282542 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-tenants\") pod 
\"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.282543 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.282745 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.282992 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.283018 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmwqc\" (UniqueName: \"kubernetes.io/projected/6d8ae81a-a9ec-4f2f-8369-0164c6c1923c-kube-api-access-lmwqc\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hb5vr\" (UID: \"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.283806 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gdxtv\" (UniqueName: \"kubernetes.io/projected/cef2c2bd-5dea-4bf2-8fcf-a3cadc541023-kube-api-access-gdxtv\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-j48px\" (UID: \"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.404780 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.420479 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.681038 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.682169 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.684085 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.684323 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.702664 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.757754 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/51f12264-af08-4cf2-9e76-98dc91b0b7a8-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 
13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.757841 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.757869 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51f12264-af08-4cf2-9e76-98dc91b0b7a8-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.757894 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/51f12264-af08-4cf2-9e76-98dc91b0b7a8-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.757915 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2t5w\" (UniqueName: \"kubernetes.io/projected/51f12264-af08-4cf2-9e76-98dc91b0b7a8-kube-api-access-z2t5w\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.757954 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: 
\"kubernetes.io/secret/51f12264-af08-4cf2-9e76-98dc91b0b7a8-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.757998 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.758040 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51f12264-af08-4cf2-9e76-98dc91b0b7a8-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.780276 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.781734 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.783361 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.783792 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.791696 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.859803 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/0a320041-5efb-4a26-b9e4-cdf85da40717-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.859888 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/51f12264-af08-4cf2-9e76-98dc91b0b7a8-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.859936 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48xwd\" (UniqueName: \"kubernetes.io/projected/0a320041-5efb-4a26-b9e4-cdf85da40717-kube-api-access-48xwd\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.859973 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/0a320041-5efb-4a26-b9e4-cdf85da40717-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.859999 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.860024 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51f12264-af08-4cf2-9e76-98dc91b0b7a8-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.860049 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/51f12264-af08-4cf2-9e76-98dc91b0b7a8-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.860069 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2t5w\" (UniqueName: \"kubernetes.io/projected/51f12264-af08-4cf2-9e76-98dc91b0b7a8-kube-api-access-z2t5w\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc 
kubenswrapper[4812]: I0216 13:52:20.860100 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/51f12264-af08-4cf2-9e76-98dc91b0b7a8-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.860130 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a320041-5efb-4a26-b9e4-cdf85da40717-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.860196 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.860224 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a320041-5efb-4a26-b9e4-cdf85da40717-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.860251 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/0a320041-5efb-4a26-b9e4-cdf85da40717-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: 
\"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.860273 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51f12264-af08-4cf2-9e76-98dc91b0b7a8-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.860289 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.860432 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.860827 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.861087 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51f12264-af08-4cf2-9e76-98dc91b0b7a8-cloudkitty-lokistack-ca-bundle\") pod 
\"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.861893 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51f12264-af08-4cf2-9e76-98dc91b0b7a8-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.865602 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/51f12264-af08-4cf2-9e76-98dc91b0b7a8-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.865835 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/51f12264-af08-4cf2-9e76-98dc91b0b7a8-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.868593 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/51f12264-af08-4cf2-9e76-98dc91b0b7a8-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.879195 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2t5w\" (UniqueName: 
\"kubernetes.io/projected/51f12264-af08-4cf2-9e76-98dc91b0b7a8-kube-api-access-z2t5w\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.886490 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.891785 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"51f12264-af08-4cf2-9e76-98dc91b0b7a8\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.917677 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.921666 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.926782 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.927022 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.927298 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.962142 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/0a320041-5efb-4a26-b9e4-cdf85da40717-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.962191 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/33486bd3-170e-428a-ab58-dd7bd52e6a53-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.962246 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/33486bd3-170e-428a-ab58-dd7bd52e6a53-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:20 crc 
kubenswrapper[4812]: I0216 13:52:20.962280 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.962318 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a320041-5efb-4a26-b9e4-cdf85da40717-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.962357 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/33486bd3-170e-428a-ab58-dd7bd52e6a53-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.962394 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a320041-5efb-4a26-b9e4-cdf85da40717-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.962427 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/0a320041-5efb-4a26-b9e4-cdf85da40717-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: 
\"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.962540 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.962587 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/0a320041-5efb-4a26-b9e4-cdf85da40717-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.962651 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9t8l\" (UniqueName: \"kubernetes.io/projected/33486bd3-170e-428a-ab58-dd7bd52e6a53-kube-api-access-l9t8l\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.962682 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33486bd3-170e-428a-ab58-dd7bd52e6a53-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.962742 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/33486bd3-170e-428a-ab58-dd7bd52e6a53-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.962848 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48xwd\" (UniqueName: \"kubernetes.io/projected/0a320041-5efb-4a26-b9e4-cdf85da40717-kube-api-access-48xwd\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.963502 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.964117 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a320041-5efb-4a26-b9e4-cdf85da40717-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.964200 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a320041-5efb-4a26-b9e4-cdf85da40717-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.966328 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/0a320041-5efb-4a26-b9e4-cdf85da40717-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.967190 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/0a320041-5efb-4a26-b9e4-cdf85da40717-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.981311 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/0a320041-5efb-4a26-b9e4-cdf85da40717-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.984507 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48xwd\" (UniqueName: \"kubernetes.io/projected/0a320041-5efb-4a26-b9e4-cdf85da40717-kube-api-access-48xwd\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.988703 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"0a320041-5efb-4a26-b9e4-cdf85da40717\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:20 crc kubenswrapper[4812]: I0216 13:52:20.996315 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.064855 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9t8l\" (UniqueName: \"kubernetes.io/projected/33486bd3-170e-428a-ab58-dd7bd52e6a53-kube-api-access-l9t8l\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.065380 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33486bd3-170e-428a-ab58-dd7bd52e6a53-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.065617 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33486bd3-170e-428a-ab58-dd7bd52e6a53-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.065807 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/33486bd3-170e-428a-ab58-dd7bd52e6a53-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.066002 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: 
\"kubernetes.io/secret/33486bd3-170e-428a-ab58-dd7bd52e6a53-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.066183 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33486bd3-170e-428a-ab58-dd7bd52e6a53-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.066480 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.066645 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.066759 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/33486bd3-170e-428a-ab58-dd7bd52e6a53-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.066762 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33486bd3-170e-428a-ab58-dd7bd52e6a53-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.069821 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/33486bd3-170e-428a-ab58-dd7bd52e6a53-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.070044 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/33486bd3-170e-428a-ab58-dd7bd52e6a53-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.071065 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/33486bd3-170e-428a-ab58-dd7bd52e6a53-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.081326 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9t8l\" (UniqueName: \"kubernetes.io/projected/33486bd3-170e-428a-ab58-dd7bd52e6a53-kube-api-access-l9t8l\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 
13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.092941 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"33486bd3-170e-428a-ab58-dd7bd52e6a53\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.124243 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.256598 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:21 crc kubenswrapper[4812]: W0216 13:52:21.330745 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95382144_b401_41b0_bf26_8a5503df91f6.slice/crio-76b816a26e4d010f2d843b7f45b8fd232489a24faf4d27daf746b5b54e1d2756 WatchSource:0}: Error finding container 76b816a26e4d010f2d843b7f45b8fd232489a24faf4d27daf746b5b54e1d2756: Status 404 returned error can't find the container with id 76b816a26e4d010f2d843b7f45b8fd232489a24faf4d27daf746b5b54e1d2756 Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.454794 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"95382144-b401-41b0-bf26-8a5503df91f6","Type":"ContainerStarted","Data":"76b816a26e4d010f2d843b7f45b8fd232489a24faf4d27daf746b5b54e1d2756"} Feb 16 13:52:21 crc kubenswrapper[4812]: I0216 13:52:21.456296 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7","Type":"ContainerStarted","Data":"0372346e522e8ab118ed87dd07b91924c03dc512b53b5138d0e68e7f4880b2d3"} Feb 16 13:52:25 crc kubenswrapper[4812]: E0216 13:52:25.767926 4812 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 16 13:52:25 crc kubenswrapper[4812]: E0216 13:52:25.770542 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnk78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:52:25 crc 
kubenswrapper[4812]: E0216 13:52:25.772117 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1" Feb 16 13:52:26 crc kubenswrapper[4812]: E0216 13:52:26.510529 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1" Feb 16 13:52:26 crc kubenswrapper[4812]: I0216 13:52:26.671647 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7dzhm"] Feb 16 13:52:35 crc kubenswrapper[4812]: W0216 13:52:35.583265 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ebd3c08_88e8_4b5d_9ce9_9386b2c4db70.slice/crio-372c3ec46110a5d9bdd2bedc8ab74eb63a7a5ec7dcfb9bbad706596509f9f75a WatchSource:0}: Error finding container 372c3ec46110a5d9bdd2bedc8ab74eb63a7a5ec7dcfb9bbad706596509f9f75a: Status 404 returned error can't find the container with id 372c3ec46110a5d9bdd2bedc8ab74eb63a7a5ec7dcfb9bbad706596509f9f75a Feb 16 13:52:36 crc kubenswrapper[4812]: I0216 13:52:36.003379 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7dzhm" event={"ID":"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70","Type":"ContainerStarted","Data":"372c3ec46110a5d9bdd2bedc8ab74eb63a7a5ec7dcfb9bbad706596509f9f75a"} Feb 16 13:52:36 crc kubenswrapper[4812]: I0216 13:52:36.110259 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 16 13:52:36 crc kubenswrapper[4812]: I0216 13:52:36.406511 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-nb-0"] Feb 16 13:52:37 crc kubenswrapper[4812]: E0216 13:52:37.354331 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 13:52:37 crc kubenswrapper[4812]: E0216 13:52:37.354812 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq4bt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-gk4pg_openstack(d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:52:37 crc kubenswrapper[4812]: E0216 13:52:37.356974 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-gk4pg" podUID="d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48" Feb 16 13:52:37 crc kubenswrapper[4812]: E0216 13:52:37.367747 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 13:52:37 crc kubenswrapper[4812]: E0216 13:52:37.367898 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h928m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-vr5nz_openstack(376ebce7-276d-48c2-8b87-9d3389fd60f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:52:37 crc kubenswrapper[4812]: E0216 13:52:37.369215 4812 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-vr5nz" podUID="376ebce7-276d-48c2-8b87-9d3389fd60f4" Feb 16 13:52:37 crc kubenswrapper[4812]: E0216 13:52:37.371108 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 13:52:37 crc kubenswrapper[4812]: E0216 13:52:37.371344 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chgwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-gl6hc_openstack(1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:52:37 crc kubenswrapper[4812]: E0216 13:52:37.373081 4812 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" podUID="1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3" Feb 16 13:52:37 crc kubenswrapper[4812]: E0216 13:52:37.400210 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 13:52:37 crc kubenswrapper[4812]: E0216 13:52:37.400429 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prsk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-5d2mw_openstack(5e9d026d-fcd8-49b3-8268-8a9e59f077d0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:52:37 crc kubenswrapper[4812]: E0216 13:52:37.402171 4812 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" podUID="5e9d026d-fcd8-49b3-8268-8a9e59f077d0" Feb 16 13:52:38 crc kubenswrapper[4812]: I0216 13:52:38.181356 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516","Type":"ContainerStarted","Data":"ae1135a9cb77c28375e68406d9ca88791e27c8b7f8ea7059b22cd76319ee1a4b"} Feb 16 13:52:38 crc kubenswrapper[4812]: I0216 13:52:38.184292 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"96cb02af-deed-4da5-96cf-28d69592caed","Type":"ContainerStarted","Data":"f2f2e0c94284753670b433a993eba7bfc4cb78a318593eaced05f9c9db94cea1"} Feb 16 13:52:38 crc kubenswrapper[4812]: E0216 13:52:38.183279 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" podUID="1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3" Feb 16 13:52:38 crc kubenswrapper[4812]: E0216 13:52:38.187298 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" podUID="5e9d026d-fcd8-49b3-8268-8a9e59f077d0" Feb 16 13:52:39 crc kubenswrapper[4812]: I0216 13:52:39.211600 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"11179909-1e24-429d-9d33-e2c448e1cf6b","Type":"ContainerStarted","Data":"379be69656fdbc87342e3811c864d67a97c599f3112dfc5fd6053e5f404254d6"} Feb 16 13:52:39 crc 
kubenswrapper[4812]: I0216 13:52:39.657569 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww"] Feb 16 13:52:39 crc kubenswrapper[4812]: W0216 13:52:39.665428 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33486bd3_170e_428a_ab58_dd7bd52e6a53.slice/crio-f5d1f765b8df3f6da98af28a452520e625ddefaa6b4ccb617501c34fdbd6f86e WatchSource:0}: Error finding container f5d1f765b8df3f6da98af28a452520e625ddefaa6b4ccb617501c34fdbd6f86e: Status 404 returned error can't find the container with id f5d1f765b8df3f6da98af28a452520e625ddefaa6b4ccb617501c34fdbd6f86e Feb 16 13:52:39 crc kubenswrapper[4812]: W0216 13:52:39.670752 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod826ded0a_246d_40b7_87d1_22fa8224d506.slice/crio-0d9b4caf91bad3ed74db89c6a29fe582cfeff3699a4d580aea888e45261edb91 WatchSource:0}: Error finding container 0d9b4caf91bad3ed74db89c6a29fe582cfeff3699a4d580aea888e45261edb91: Status 404 returned error can't find the container with id 0d9b4caf91bad3ed74db89c6a29fe582cfeff3699a4d580aea888e45261edb91 Feb 16 13:52:39 crc kubenswrapper[4812]: I0216 13:52:39.678519 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 13:52:39 crc kubenswrapper[4812]: I0216 13:52:39.714346 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vr5nz" Feb 16 13:52:39 crc kubenswrapper[4812]: I0216 13:52:39.739690 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gk4pg" Feb 16 13:52:39 crc kubenswrapper[4812]: I0216 13:52:39.900037 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48-config\") pod \"d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48\" (UID: \"d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48\") " Feb 16 13:52:39 crc kubenswrapper[4812]: I0216 13:52:39.900105 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/376ebce7-276d-48c2-8b87-9d3389fd60f4-dns-svc\") pod \"376ebce7-276d-48c2-8b87-9d3389fd60f4\" (UID: \"376ebce7-276d-48c2-8b87-9d3389fd60f4\") " Feb 16 13:52:39 crc kubenswrapper[4812]: I0216 13:52:39.900164 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/376ebce7-276d-48c2-8b87-9d3389fd60f4-config\") pod \"376ebce7-276d-48c2-8b87-9d3389fd60f4\" (UID: \"376ebce7-276d-48c2-8b87-9d3389fd60f4\") " Feb 16 13:52:39 crc kubenswrapper[4812]: I0216 13:52:39.900203 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h928m\" (UniqueName: \"kubernetes.io/projected/376ebce7-276d-48c2-8b87-9d3389fd60f4-kube-api-access-h928m\") pod \"376ebce7-276d-48c2-8b87-9d3389fd60f4\" (UID: \"376ebce7-276d-48c2-8b87-9d3389fd60f4\") " Feb 16 13:52:39 crc kubenswrapper[4812]: I0216 13:52:39.900231 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq4bt\" (UniqueName: \"kubernetes.io/projected/d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48-kube-api-access-nq4bt\") pod \"d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48\" (UID: \"d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48\") " Feb 16 13:52:39 crc kubenswrapper[4812]: I0216 13:52:39.901633 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/376ebce7-276d-48c2-8b87-9d3389fd60f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "376ebce7-276d-48c2-8b87-9d3389fd60f4" (UID: "376ebce7-276d-48c2-8b87-9d3389fd60f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:52:39 crc kubenswrapper[4812]: I0216 13:52:39.902081 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48-config" (OuterVolumeSpecName: "config") pod "d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48" (UID: "d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:52:39 crc kubenswrapper[4812]: I0216 13:52:39.902609 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/376ebce7-276d-48c2-8b87-9d3389fd60f4-config" (OuterVolumeSpecName: "config") pod "376ebce7-276d-48c2-8b87-9d3389fd60f4" (UID: "376ebce7-276d-48c2-8b87-9d3389fd60f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:52:39 crc kubenswrapper[4812]: I0216 13:52:39.908866 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48-kube-api-access-nq4bt" (OuterVolumeSpecName: "kube-api-access-nq4bt") pod "d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48" (UID: "d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48"). InnerVolumeSpecName "kube-api-access-nq4bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:52:39 crc kubenswrapper[4812]: I0216 13:52:39.909769 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/376ebce7-276d-48c2-8b87-9d3389fd60f4-kube-api-access-h928m" (OuterVolumeSpecName: "kube-api-access-h928m") pod "376ebce7-276d-48c2-8b87-9d3389fd60f4" (UID: "376ebce7-276d-48c2-8b87-9d3389fd60f4"). InnerVolumeSpecName "kube-api-access-h928m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.003415 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.003473 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/376ebce7-276d-48c2-8b87-9d3389fd60f4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.003482 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/376ebce7-276d-48c2-8b87-9d3389fd60f4-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.003491 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h928m\" (UniqueName: \"kubernetes.io/projected/376ebce7-276d-48c2-8b87-9d3389fd60f4-kube-api-access-h928m\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.003501 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq4bt\" (UniqueName: \"kubernetes.io/projected/d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48-kube-api-access-nq4bt\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.118327 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.136926 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.151233 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f"] Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.163705 4812 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px"] Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.178682 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h"] Feb 16 13:52:40 crc kubenswrapper[4812]: W0216 13:52:40.179219 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcef2c2bd_5dea_4bf2_8fcf_a3cadc541023.slice/crio-9fa1dabcb996dbe2a88c015d3c21493e0cfa3a9d647a6841f8d7a762bdd00bad WatchSource:0}: Error finding container 9fa1dabcb996dbe2a88c015d3c21493e0cfa3a9d647a6841f8d7a762bdd00bad: Status 404 returned error can't find the container with id 9fa1dabcb996dbe2a88c015d3c21493e0cfa3a9d647a6841f8d7a762bdd00bad Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.187897 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.194206 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.220540 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hjxr5"] Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.231467 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr"] Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.547832 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6039f662-e9ac-455c-b4da-9bcbe34e1396","Type":"ContainerStarted","Data":"04e5804e0a98d60afe9cb7e7e16d8f1527ea077e9314bc55bf83714bc04c0a81"} Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.551566 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"e02a9868-e12c-4a65-9ba5-4a5965131b5b","Type":"ContainerStarted","Data":"e2ad9e3dd430b14f205a07693b722611e0cbc95942123bb2b102dc8086123d7f"} Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.555338 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px" event={"ID":"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023","Type":"ContainerStarted","Data":"9fa1dabcb996dbe2a88c015d3c21493e0cfa3a9d647a6841f8d7a762bdd00bad"} Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.557285 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"33486bd3-170e-428a-ab58-dd7bd52e6a53","Type":"ContainerStarted","Data":"f5d1f765b8df3f6da98af28a452520e625ddefaa6b4ccb617501c34fdbd6f86e"} Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.569176 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7","Type":"ContainerStarted","Data":"691fda13371709e478f20ef9ed05cfcae96c445fa0ef9007fbd96ea576863506"} Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.577927 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f00dce1e-5743-4129-b78b-4a29351da7ed","Type":"ContainerStarted","Data":"2f89168b209403e8c4aac4914ae81b28d57a9c2bf93aa494cb245990b3af7ba1"} Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.578976 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vr5nz" event={"ID":"376ebce7-276d-48c2-8b87-9d3389fd60f4","Type":"ContainerDied","Data":"770fa4bbb625eb4bf247d0526b38347292ebe2af151e039c8fcf12f3412d23f3"} Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.579110 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vr5nz" Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.581418 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" event={"ID":"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f","Type":"ContainerStarted","Data":"b418d5f8378e3fa23402499d644fe117b581648ba5f84c190e9b86a4b39ac21f"} Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.582509 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"51f12264-af08-4cf2-9e76-98dc91b0b7a8","Type":"ContainerStarted","Data":"a1b3aaf293bda39c785a42858f0cc69a33bc7989f3078c5586d257e1ec884501"} Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.588264 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"0a320041-5efb-4a26-b9e4-cdf85da40717","Type":"ContainerStarted","Data":"356508fd1356e9e7cc2316acd60fef070fe13d7716c1090c83bbdb42a4da66b0"} Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.595754 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-gk4pg" event={"ID":"d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48","Type":"ContainerDied","Data":"736739f7f43c1bb3b700195b12ef7e6dc7aab7d8489306d79037323e7af93bb9"} Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.595815 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gk4pg" Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.599849 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"95382144-b401-41b0-bf26-8a5503df91f6","Type":"ContainerStarted","Data":"2150065b4e04d168d0b28b993a208fefce73407b65fd79d78ba5c1b957a3827f"} Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.600746 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.602030 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww" event={"ID":"826ded0a-246d-40b7-87d1-22fa8224d506","Type":"ContainerStarted","Data":"0d9b4caf91bad3ed74db89c6a29fe582cfeff3699a4d580aea888e45261edb91"} Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.644682 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=19.578260329 podStartE2EDuration="35.644659488s" podCreationTimestamp="2026-02-16 13:52:05 +0000 UTC" firstStartedPulling="2026-02-16 13:52:21.333802588 +0000 UTC m=+1230.398133289" lastFinishedPulling="2026-02-16 13:52:37.400201737 +0000 UTC m=+1246.464532448" observedRunningTime="2026-02-16 13:52:40.641554068 +0000 UTC m=+1249.705884789" watchObservedRunningTime="2026-02-16 13:52:40.644659488 +0000 UTC m=+1249.708990189" Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.714165 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vr5nz"] Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.722521 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vr5nz"] Feb 16 13:52:40 crc kubenswrapper[4812]: I0216 13:52:40.739176 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gk4pg"] Feb 16 13:52:40 crc 
kubenswrapper[4812]: I0216 13:52:40.747415 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gk4pg"] Feb 16 13:52:41 crc kubenswrapper[4812]: I0216 13:52:41.154980 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 13:52:41 crc kubenswrapper[4812]: W0216 13:52:41.162281 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7eae7df6_e3b7_4ac5_bb18_6b781744747d.slice/crio-5da014e4e76bcd75a67c998dba1fd484cd91febc56d0165631d05332bb98513c WatchSource:0}: Error finding container 5da014e4e76bcd75a67c998dba1fd484cd91febc56d0165631d05332bb98513c: Status 404 returned error can't find the container with id 5da014e4e76bcd75a67c998dba1fd484cd91febc56d0165631d05332bb98513c Feb 16 13:52:41 crc kubenswrapper[4812]: I0216 13:52:41.611147 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h" event={"ID":"d909c793-0634-48f0-8f71-4f21dc9979af","Type":"ContainerStarted","Data":"d0786d01eceb60b483c147f28dcbe958c7ef77304e87e886a2b332f2c03787a1"} Feb 16 13:52:41 crc kubenswrapper[4812]: I0216 13:52:41.612549 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr" event={"ID":"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c","Type":"ContainerStarted","Data":"0a8cd37f4ea472ea5a0b6a595b77f9fb33ccdd9f4edbb443d445817ca1e61db8"} Feb 16 13:52:41 crc kubenswrapper[4812]: I0216 13:52:41.614412 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hjxr5" event={"ID":"619a5cb7-30a8-4ac4-955e-d2c97ce49fda","Type":"ContainerStarted","Data":"96131a6518db5b4a329d77f1bb2781638a571bb975b6ba7555a6dfec8f3195c4"} Feb 16 13:52:41 crc kubenswrapper[4812]: I0216 13:52:41.615936 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"7eae7df6-e3b7-4ac5-bb18-6b781744747d","Type":"ContainerStarted","Data":"5da014e4e76bcd75a67c998dba1fd484cd91febc56d0165631d05332bb98513c"} Feb 16 13:52:41 crc kubenswrapper[4812]: I0216 13:52:41.618205 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1","Type":"ContainerStarted","Data":"824ff668f9c6354c85701d1e4c195c81446c13d5dd3370e5e033d693821ab5d1"} Feb 16 13:52:41 crc kubenswrapper[4812]: I0216 13:52:41.898309 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="376ebce7-276d-48c2-8b87-9d3389fd60f4" path="/var/lib/kubelet/pods/376ebce7-276d-48c2-8b87-9d3389fd60f4/volumes" Feb 16 13:52:41 crc kubenswrapper[4812]: I0216 13:52:41.899852 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48" path="/var/lib/kubelet/pods/d03c6dc6-9a9e-4bce-9b7b-96ebddab6f48/volumes" Feb 16 13:52:43 crc kubenswrapper[4812]: I0216 13:52:43.643603 4812 generic.go:334] "Generic (PLEG): container finished" podID="d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7" containerID="691fda13371709e478f20ef9ed05cfcae96c445fa0ef9007fbd96ea576863506" exitCode=0 Feb 16 13:52:43 crc kubenswrapper[4812]: I0216 13:52:43.643776 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7","Type":"ContainerDied","Data":"691fda13371709e478f20ef9ed05cfcae96c445fa0ef9007fbd96ea576863506"} Feb 16 13:52:43 crc kubenswrapper[4812]: I0216 13:52:43.646337 4812 generic.go:334] "Generic (PLEG): container finished" podID="11179909-1e24-429d-9d33-e2c448e1cf6b" containerID="379be69656fdbc87342e3811c864d67a97c599f3112dfc5fd6053e5f404254d6" exitCode=0 Feb 16 13:52:43 crc kubenswrapper[4812]: I0216 13:52:43.646363 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"11179909-1e24-429d-9d33-e2c448e1cf6b","Type":"ContainerDied","Data":"379be69656fdbc87342e3811c864d67a97c599f3112dfc5fd6053e5f404254d6"} Feb 16 13:52:46 crc kubenswrapper[4812]: I0216 13:52:46.277648 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 16 13:52:47 crc kubenswrapper[4812]: I0216 13:52:47.682161 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"11179909-1e24-429d-9d33-e2c448e1cf6b","Type":"ContainerStarted","Data":"a01952ac8c2883ce376029e9b6c919d3144ec7f77fb722bd3ed5bacd27c67266"} Feb 16 13:52:47 crc kubenswrapper[4812]: I0216 13:52:47.690386 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"0a320041-5efb-4a26-b9e4-cdf85da40717","Type":"ContainerStarted","Data":"441826b50a3c1915ca0b5d5fef21ab76af62606f8e08cb7f34a1232a6522c482"} Feb 16 13:52:47 crc kubenswrapper[4812]: I0216 13:52:47.690525 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:52:47 crc kubenswrapper[4812]: I0216 13:52:47.692596 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h" event={"ID":"d909c793-0634-48f0-8f71-4f21dc9979af","Type":"ContainerStarted","Data":"24378422a6db4096adfdcb317ff5b7613df91699e7d4cfdd95fdca20154a0d45"} Feb 16 13:52:47 crc kubenswrapper[4812]: I0216 13:52:47.692728 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h" Feb 16 13:52:47 crc kubenswrapper[4812]: I0216 13:52:47.700182 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"33486bd3-170e-428a-ab58-dd7bd52e6a53","Type":"ContainerStarted","Data":"5ec935f32e21249ff7a059b5c31d2c330077e2a7f70ba17f09ff15e93a6bde41"} Feb 16 13:52:47 
crc kubenswrapper[4812]: I0216 13:52:47.700240 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:52:47 crc kubenswrapper[4812]: I0216 13:52:47.706217 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7","Type":"ContainerStarted","Data":"e4915b1e9a9bbf004d59999108e7f7a9c5df088fd4217902625043b53c991920"} Feb 16 13:52:47 crc kubenswrapper[4812]: I0216 13:52:47.713201 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=12.882837322 podStartE2EDuration="44.713180203s" podCreationTimestamp="2026-02-16 13:52:03 +0000 UTC" firstStartedPulling="2026-02-16 13:52:05.58318 +0000 UTC m=+1214.647510701" lastFinishedPulling="2026-02-16 13:52:37.413522881 +0000 UTC m=+1246.477853582" observedRunningTime="2026-02-16 13:52:47.711110803 +0000 UTC m=+1256.775441504" watchObservedRunningTime="2026-02-16 13:52:47.713180203 +0000 UTC m=+1256.777510914" Feb 16 13:52:47 crc kubenswrapper[4812]: I0216 13:52:47.713744 4812 generic.go:334] "Generic (PLEG): container finished" podID="619a5cb7-30a8-4ac4-955e-d2c97ce49fda" containerID="d2e46969d0af24422a7f82947ea73980e7eb7f407bde4016b4d81dd70783c85a" exitCode=0 Feb 16 13:52:47 crc kubenswrapper[4812]: I0216 13:52:47.713800 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hjxr5" event={"ID":"619a5cb7-30a8-4ac4-955e-d2c97ce49fda","Type":"ContainerDied","Data":"d2e46969d0af24422a7f82947ea73980e7eb7f407bde4016b4d81dd70783c85a"} Feb 16 13:52:47 crc kubenswrapper[4812]: I0216 13:52:47.760166 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=22.391161774 podStartE2EDuration="28.760143497s" podCreationTimestamp="2026-02-16 13:52:19 +0000 UTC" firstStartedPulling="2026-02-16 
13:52:40.156628902 +0000 UTC m=+1249.220959613" lastFinishedPulling="2026-02-16 13:52:46.525610635 +0000 UTC m=+1255.589941336" observedRunningTime="2026-02-16 13:52:47.75472415 +0000 UTC m=+1256.819054871" watchObservedRunningTime="2026-02-16 13:52:47.760143497 +0000 UTC m=+1256.824474208" Feb 16 13:52:47 crc kubenswrapper[4812]: I0216 13:52:47.766060 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=22.336009722 podStartE2EDuration="28.766041106s" podCreationTimestamp="2026-02-16 13:52:19 +0000 UTC" firstStartedPulling="2026-02-16 13:52:39.667981118 +0000 UTC m=+1248.732311819" lastFinishedPulling="2026-02-16 13:52:46.098012502 +0000 UTC m=+1255.162343203" observedRunningTime="2026-02-16 13:52:47.737740871 +0000 UTC m=+1256.802071572" watchObservedRunningTime="2026-02-16 13:52:47.766041106 +0000 UTC m=+1256.830371807" Feb 16 13:52:47 crc kubenswrapper[4812]: I0216 13:52:47.785433 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h" podStartSLOduration=22.816263885 podStartE2EDuration="28.785414255s" podCreationTimestamp="2026-02-16 13:52:19 +0000 UTC" firstStartedPulling="2026-02-16 13:52:40.555605041 +0000 UTC m=+1249.619935732" lastFinishedPulling="2026-02-16 13:52:46.524755401 +0000 UTC m=+1255.589086102" observedRunningTime="2026-02-16 13:52:47.778755393 +0000 UTC m=+1256.843086104" watchObservedRunningTime="2026-02-16 13:52:47.785414255 +0000 UTC m=+1256.849744946" Feb 16 13:52:47 crc kubenswrapper[4812]: I0216 13:52:47.810013 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.690149854 podStartE2EDuration="43.809994773s" podCreationTimestamp="2026-02-16 13:52:04 +0000 UTC" firstStartedPulling="2026-02-16 13:52:21.333782828 +0000 UTC m=+1230.398113539" lastFinishedPulling="2026-02-16 
13:52:37.453627757 +0000 UTC m=+1246.517958458" observedRunningTime="2026-02-16 13:52:47.800642134 +0000 UTC m=+1256.864972855" watchObservedRunningTime="2026-02-16 13:52:47.809994773 +0000 UTC m=+1256.874325474" Feb 16 13:52:48 crc kubenswrapper[4812]: I0216 13:52:48.481111 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5d2mw"] Feb 16 13:52:48 crc kubenswrapper[4812]: I0216 13:52:48.541511 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-8jm9c"] Feb 16 13:52:48 crc kubenswrapper[4812]: I0216 13:52:48.543053 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" Feb 16 13:52:48 crc kubenswrapper[4812]: I0216 13:52:48.550384 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-8jm9c"] Feb 16 13:52:48 crc kubenswrapper[4812]: I0216 13:52:48.634193 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4702718-14ef-4b62-acfa-016d0a04a952-config\") pod \"dnsmasq-dns-7cb5889db5-8jm9c\" (UID: \"c4702718-14ef-4b62-acfa-016d0a04a952\") " pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" Feb 16 13:52:48 crc kubenswrapper[4812]: I0216 13:52:48.634334 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4702718-14ef-4b62-acfa-016d0a04a952-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-8jm9c\" (UID: \"c4702718-14ef-4b62-acfa-016d0a04a952\") " pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" Feb 16 13:52:48 crc kubenswrapper[4812]: I0216 13:52:48.634405 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkxbb\" (UniqueName: \"kubernetes.io/projected/c4702718-14ef-4b62-acfa-016d0a04a952-kube-api-access-nkxbb\") pod \"dnsmasq-dns-7cb5889db5-8jm9c\" 
(UID: \"c4702718-14ef-4b62-acfa-016d0a04a952\") " pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" Feb 16 13:52:48 crc kubenswrapper[4812]: I0216 13:52:48.735688 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4702718-14ef-4b62-acfa-016d0a04a952-config\") pod \"dnsmasq-dns-7cb5889db5-8jm9c\" (UID: \"c4702718-14ef-4b62-acfa-016d0a04a952\") " pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" Feb 16 13:52:48 crc kubenswrapper[4812]: I0216 13:52:48.735797 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4702718-14ef-4b62-acfa-016d0a04a952-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-8jm9c\" (UID: \"c4702718-14ef-4b62-acfa-016d0a04a952\") " pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" Feb 16 13:52:48 crc kubenswrapper[4812]: I0216 13:52:48.735853 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkxbb\" (UniqueName: \"kubernetes.io/projected/c4702718-14ef-4b62-acfa-016d0a04a952-kube-api-access-nkxbb\") pod \"dnsmasq-dns-7cb5889db5-8jm9c\" (UID: \"c4702718-14ef-4b62-acfa-016d0a04a952\") " pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" Feb 16 13:52:48 crc kubenswrapper[4812]: I0216 13:52:48.737521 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4702718-14ef-4b62-acfa-016d0a04a952-config\") pod \"dnsmasq-dns-7cb5889db5-8jm9c\" (UID: \"c4702718-14ef-4b62-acfa-016d0a04a952\") " pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" Feb 16 13:52:48 crc kubenswrapper[4812]: I0216 13:52:48.737562 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4702718-14ef-4b62-acfa-016d0a04a952-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-8jm9c\" (UID: \"c4702718-14ef-4b62-acfa-016d0a04a952\") " pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" Feb 16 13:52:48 crc 
kubenswrapper[4812]: I0216 13:52:48.738299 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww" event={"ID":"826ded0a-246d-40b7-87d1-22fa8224d506","Type":"ContainerStarted","Data":"f38414624f7000e7041c9ac3432f57ccf0f36630fa6398530a4125ffb3f8a478"} Feb 16 13:52:48 crc kubenswrapper[4812]: I0216 13:52:48.759991 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww" podStartSLOduration=23.026132972 podStartE2EDuration="29.759967473s" podCreationTimestamp="2026-02-16 13:52:19 +0000 UTC" firstStartedPulling="2026-02-16 13:52:39.682113705 +0000 UTC m=+1248.746444406" lastFinishedPulling="2026-02-16 13:52:46.415948206 +0000 UTC m=+1255.480278907" observedRunningTime="2026-02-16 13:52:48.756869654 +0000 UTC m=+1257.821200355" watchObservedRunningTime="2026-02-16 13:52:48.759967473 +0000 UTC m=+1257.824298174" Feb 16 13:52:48 crc kubenswrapper[4812]: I0216 13:52:48.786636 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkxbb\" (UniqueName: \"kubernetes.io/projected/c4702718-14ef-4b62-acfa-016d0a04a952-kube-api-access-nkxbb\") pod \"dnsmasq-dns-7cb5889db5-8jm9c\" (UID: \"c4702718-14ef-4b62-acfa-016d0a04a952\") " pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" Feb 16 13:52:48 crc kubenswrapper[4812]: I0216 13:52:48.867073 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" Feb 16 13:52:48 crc kubenswrapper[4812]: I0216 13:52:48.916343 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.042947 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prsk4\" (UniqueName: \"kubernetes.io/projected/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-kube-api-access-prsk4\") pod \"5e9d026d-fcd8-49b3-8268-8a9e59f077d0\" (UID: \"5e9d026d-fcd8-49b3-8268-8a9e59f077d0\") " Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.043115 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-dns-svc\") pod \"5e9d026d-fcd8-49b3-8268-8a9e59f077d0\" (UID: \"5e9d026d-fcd8-49b3-8268-8a9e59f077d0\") " Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.043139 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-config\") pod \"5e9d026d-fcd8-49b3-8268-8a9e59f077d0\" (UID: \"5e9d026d-fcd8-49b3-8268-8a9e59f077d0\") " Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.043902 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-config" (OuterVolumeSpecName: "config") pod "5e9d026d-fcd8-49b3-8268-8a9e59f077d0" (UID: "5e9d026d-fcd8-49b3-8268-8a9e59f077d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.043932 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5e9d026d-fcd8-49b3-8268-8a9e59f077d0" (UID: "5e9d026d-fcd8-49b3-8268-8a9e59f077d0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.048288 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.048322 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.060428 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-kube-api-access-prsk4" (OuterVolumeSpecName: "kube-api-access-prsk4") pod "5e9d026d-fcd8-49b3-8268-8a9e59f077d0" (UID: "5e9d026d-fcd8-49b3-8268-8a9e59f077d0"). InnerVolumeSpecName "kube-api-access-prsk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.151858 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prsk4\" (UniqueName: \"kubernetes.io/projected/5e9d026d-fcd8-49b3-8268-8a9e59f077d0-kube-api-access-prsk4\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.440675 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-8jm9c"] Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.651749 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.658842 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.661292 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-hbl8x" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.661464 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.661504 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.661541 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.690855 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.755865 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr" event={"ID":"6d8ae81a-a9ec-4f2f-8369-0164c6c1923c","Type":"ContainerStarted","Data":"429eed8a125ef12ff7275c516dede4104aece8ef19deb8b32e940bba27b1ed37"} Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.756904 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.769896 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7eae7df6-e3b7-4ac5-bb18-6b781744747d","Type":"ContainerStarted","Data":"1e2d7b9a7839042a5ccfe1857dea15cd8c97bab929d2d1e60ed7b79c31238bbd"} Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.778604 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f34d582-3b55-4d2a-91b3-c64acd57981f-combined-ca-bundle\") pod 
\"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.778777 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.778816 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vlh6\" (UniqueName: \"kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-kube-api-access-2vlh6\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.778850 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7f34d582-3b55-4d2a-91b3-c64acd57981f-cache\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.778888 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d7003808-e510-4e94-9623-7ecbdf32fe9b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7003808-e510-4e94-9623-7ecbdf32fe9b\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.778919 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7f34d582-3b55-4d2a-91b3-c64acd57981f-lock\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " 
pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.786690 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.787156 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" event={"ID":"a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f","Type":"ContainerStarted","Data":"5523d93eafc2065b2ce678f786261d576b40d9dab204ebc8f1a462f8e59926cb"} Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.787296 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.792013 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6039f662-e9ac-455c-b4da-9bcbe34e1396","Type":"ContainerStarted","Data":"5e066bbc0364950e73123a474b801275b367f9b18d3e19d360b9d8602a4a7f1c"} Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.792969 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.795019 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7dzhm" event={"ID":"2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70","Type":"ContainerStarted","Data":"5a2857f363b74fd5aa4954f9ac4e289078695a54c1df724762d5c43e72c67f97"} Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.795677 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-7dzhm" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.800301 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hb5vr" podStartSLOduration=24.837237234 podStartE2EDuration="30.800278917s" 
podCreationTimestamp="2026-02-16 13:52:19 +0000 UTC" firstStartedPulling="2026-02-16 13:52:40.562609663 +0000 UTC m=+1249.626940364" lastFinishedPulling="2026-02-16 13:52:46.525651336 +0000 UTC m=+1255.589982047" observedRunningTime="2026-02-16 13:52:49.784963056 +0000 UTC m=+1258.849293767" watchObservedRunningTime="2026-02-16 13:52:49.800278917 +0000 UTC m=+1258.864609618" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.811005 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516","Type":"ContainerStarted","Data":"bd4f1e32546c105d4747ab6bbec2e961be82af01a6e1abf47fd6164bd6d1c7dc"} Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.829200 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hjxr5" event={"ID":"619a5cb7-30a8-4ac4-955e-d2c97ce49fda","Type":"ContainerStarted","Data":"21bc043d17040e63fa7de0c155bc5906db73656fd918363a23c60430f2f322b6"} Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.843841 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7dzhm" podStartSLOduration=28.130999652 podStartE2EDuration="38.843821852s" podCreationTimestamp="2026-02-16 13:52:11 +0000 UTC" firstStartedPulling="2026-02-16 13:52:35.60267322 +0000 UTC m=+1244.667003921" lastFinishedPulling="2026-02-16 13:52:46.31549542 +0000 UTC m=+1255.379826121" observedRunningTime="2026-02-16 13:52:49.835257165 +0000 UTC m=+1258.899587866" watchObservedRunningTime="2026-02-16 13:52:49.843821852 +0000 UTC m=+1258.908152543" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.850102 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" event={"ID":"5e9d026d-fcd8-49b3-8268-8a9e59f077d0","Type":"ContainerDied","Data":"109dc29adaf3c71cc8d97eb5e633ee083d819e31a8a8fd6f8c3fc15ea7a97657"} Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.850255 4812 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5d2mw" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.867488 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=33.986573914 podStartE2EDuration="41.867462793s" podCreationTimestamp="2026-02-16 13:52:08 +0000 UTC" firstStartedPulling="2026-02-16 13:52:40.20270217 +0000 UTC m=+1249.267032871" lastFinishedPulling="2026-02-16 13:52:48.083591049 +0000 UTC m=+1257.147921750" observedRunningTime="2026-02-16 13:52:49.860898264 +0000 UTC m=+1258.925228975" watchObservedRunningTime="2026-02-16 13:52:49.867462793 +0000 UTC m=+1258.931793494" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.868753 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"51f12264-af08-4cf2-9e76-98dc91b0b7a8","Type":"ContainerStarted","Data":"e6933854f6fe30c101637bc825d6494d418f60b82ba12894d9d89fb1cb82f11f"} Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.869708 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.877490 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" event={"ID":"c4702718-14ef-4b62-acfa-016d0a04a952","Type":"ContainerStarted","Data":"d87335e405646b370b1f784b066f94cff86b34009da250601619e6667f8a8c59"} Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.880727 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.880877 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vlh6\" (UniqueName: \"kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-kube-api-access-2vlh6\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.880928 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7f34d582-3b55-4d2a-91b3-c64acd57981f-cache\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.880992 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d7003808-e510-4e94-9623-7ecbdf32fe9b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7003808-e510-4e94-9623-7ecbdf32fe9b\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.881052 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7f34d582-3b55-4d2a-91b3-c64acd57981f-lock\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.881091 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f34d582-3b55-4d2a-91b3-c64acd57981f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: E0216 13:52:49.882165 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 13:52:49 crc kubenswrapper[4812]: E0216 
13:52:49.882193 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 13:52:49 crc kubenswrapper[4812]: E0216 13:52:49.882242 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift podName:7f34d582-3b55-4d2a-91b3-c64acd57981f nodeName:}" failed. No retries permitted until 2026-02-16 13:52:50.382221619 +0000 UTC m=+1259.446552320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift") pod "swift-storage-0" (UID: "7f34d582-3b55-4d2a-91b3-c64acd57981f") : configmap "swift-ring-files" not found Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.885245 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7f34d582-3b55-4d2a-91b3-c64acd57981f-cache\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.888573 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7f34d582-3b55-4d2a-91b3-c64acd57981f-lock\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.898511 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.898568 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d7003808-e510-4e94-9623-7ecbdf32fe9b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7003808-e510-4e94-9623-7ecbdf32fe9b\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b7127ac4886b390419e655803a90a0b0a01f15ac8fdf984d76c35ca3bf656fe3/globalmount\"" pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.905776 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" podStartSLOduration=24.638842684 podStartE2EDuration="30.905750617s" podCreationTimestamp="2026-02-16 13:52:19 +0000 UTC" firstStartedPulling="2026-02-16 13:52:40.15101756 +0000 UTC m=+1249.215348261" lastFinishedPulling="2026-02-16 13:52:46.417925493 +0000 UTC m=+1255.482256194" observedRunningTime="2026-02-16 13:52:49.885982557 +0000 UTC m=+1258.950313258" watchObservedRunningTime="2026-02-16 13:52:49.905750617 +0000 UTC m=+1258.970081318" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.919069 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=24.551733464 podStartE2EDuration="30.91904295s" podCreationTimestamp="2026-02-16 13:52:19 +0000 UTC" firstStartedPulling="2026-02-16 13:52:40.173541559 +0000 UTC m=+1249.237872260" lastFinishedPulling="2026-02-16 13:52:46.540851045 +0000 UTC m=+1255.605181746" observedRunningTime="2026-02-16 13:52:49.914089387 +0000 UTC m=+1258.978420108" watchObservedRunningTime="2026-02-16 13:52:49.91904295 +0000 UTC m=+1258.983373651" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.927101 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.927136 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.927150 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px" event={"ID":"cef2c2bd-5dea-4bf2-8fcf-a3cadc541023","Type":"ContainerStarted","Data":"cf5d84c0a3662e972a1a3b2cd246f1dce9231693ec12d5e61133e48d7770a57e"} Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.934060 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.976786 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5d2mw"] Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.981759 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f34d582-3b55-4d2a-91b3-c64acd57981f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.982764 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vlh6\" (UniqueName: \"kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-kube-api-access-2vlh6\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:49 crc kubenswrapper[4812]: I0216 13:52:49.986523 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5d2mw"] Feb 16 13:52:50 crc kubenswrapper[4812]: I0216 13:52:50.045488 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-j48px" podStartSLOduration=23.819765268 podStartE2EDuration="30.045465593s" podCreationTimestamp="2026-02-16 13:52:20 +0000 UTC" firstStartedPulling="2026-02-16 13:52:40.193168525 +0000 UTC m=+1249.257499226" lastFinishedPulling="2026-02-16 13:52:46.41886885 +0000 UTC m=+1255.483199551" observedRunningTime="2026-02-16 13:52:50.004211315 +0000 UTC m=+1259.068542026" watchObservedRunningTime="2026-02-16 13:52:50.045465593 +0000 UTC m=+1259.109796294" Feb 16 13:52:50 crc kubenswrapper[4812]: I0216 13:52:50.246404 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d7003808-e510-4e94-9623-7ecbdf32fe9b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7003808-e510-4e94-9623-7ecbdf32fe9b\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:50 crc kubenswrapper[4812]: I0216 13:52:50.395308 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:50 crc kubenswrapper[4812]: E0216 13:52:50.395653 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 13:52:50 crc kubenswrapper[4812]: E0216 13:52:50.395678 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 13:52:50 crc kubenswrapper[4812]: E0216 13:52:50.395728 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift podName:7f34d582-3b55-4d2a-91b3-c64acd57981f nodeName:}" failed. 
No retries permitted until 2026-02-16 13:52:51.395710518 +0000 UTC m=+1260.460041219 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift") pod "swift-storage-0" (UID: "7f34d582-3b55-4d2a-91b3-c64acd57981f") : configmap "swift-ring-files" not found Feb 16 13:52:50 crc kubenswrapper[4812]: I0216 13:52:50.909652 4812 generic.go:334] "Generic (PLEG): container finished" podID="c4702718-14ef-4b62-acfa-016d0a04a952" containerID="afd050bd7daa84dc2f123f80a549dbb7ffcaa7742a836165793241a283c950e4" exitCode=0 Feb 16 13:52:50 crc kubenswrapper[4812]: I0216 13:52:50.909746 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" event={"ID":"c4702718-14ef-4b62-acfa-016d0a04a952","Type":"ContainerDied","Data":"afd050bd7daa84dc2f123f80a549dbb7ffcaa7742a836165793241a283c950e4"} Feb 16 13:52:50 crc kubenswrapper[4812]: I0216 13:52:50.916706 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hjxr5" event={"ID":"619a5cb7-30a8-4ac4-955e-d2c97ce49fda","Type":"ContainerStarted","Data":"6af75600f6134e80e7ff292f521bb87e435c2e68f4d96e8497bebb64ebc475b6"} Feb 16 13:52:50 crc kubenswrapper[4812]: I0216 13:52:50.916888 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:50 crc kubenswrapper[4812]: I0216 13:52:50.916911 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:52:50 crc kubenswrapper[4812]: I0216 13:52:50.919364 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"96cb02af-deed-4da5-96cf-28d69592caed","Type":"ContainerStarted","Data":"800f921e1c83753daea77f0b0be9cf51ebc51e6f4f8b82276eab7f955e87c5c2"} Feb 16 13:52:50 crc kubenswrapper[4812]: I0216 13:52:50.967883 4812 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-hjxr5" podStartSLOduration=33.444376712 podStartE2EDuration="38.967865768s" podCreationTimestamp="2026-02-16 13:52:12 +0000 UTC" firstStartedPulling="2026-02-16 13:52:40.568733019 +0000 UTC m=+1249.633063720" lastFinishedPulling="2026-02-16 13:52:46.092222075 +0000 UTC m=+1255.156552776" observedRunningTime="2026-02-16 13:52:50.96654741 +0000 UTC m=+1260.030878141" watchObservedRunningTime="2026-02-16 13:52:50.967865768 +0000 UTC m=+1260.032196469" Feb 16 13:52:51 crc kubenswrapper[4812]: I0216 13:52:51.414847 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:51 crc kubenswrapper[4812]: E0216 13:52:51.415095 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 13:52:51 crc kubenswrapper[4812]: E0216 13:52:51.415579 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 13:52:51 crc kubenswrapper[4812]: E0216 13:52:51.415651 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift podName:7f34d582-3b55-4d2a-91b3-c64acd57981f nodeName:}" failed. No retries permitted until 2026-02-16 13:52:53.415630004 +0000 UTC m=+1262.479960705 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift") pod "swift-storage-0" (UID: "7f34d582-3b55-4d2a-91b3-c64acd57981f") : configmap "swift-ring-files" not found Feb 16 13:52:51 crc kubenswrapper[4812]: E0216 13:52:51.464609 4812 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.252:54032->38.129.56.252:44981: write tcp 38.129.56.252:54032->38.129.56.252:44981: write: broken pipe Feb 16 13:52:51 crc kubenswrapper[4812]: I0216 13:52:51.905142 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e9d026d-fcd8-49b3-8268-8a9e59f077d0" path="/var/lib/kubelet/pods/5e9d026d-fcd8-49b3-8268-8a9e59f077d0/volumes" Feb 16 13:52:51 crc kubenswrapper[4812]: I0216 13:52:51.930793 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e02a9868-e12c-4a65-9ba5-4a5965131b5b","Type":"ContainerStarted","Data":"bd870744e6d645686b23ecaf761646cbeb08e898465be552377c3334631d1441"} Feb 16 13:52:51 crc kubenswrapper[4812]: I0216 13:52:51.934031 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516","Type":"ContainerStarted","Data":"9aec7589ae8a984846b89b7658beb6c29d288a5025fd2a90d852c8ec02bca88b"} Feb 16 13:52:51 crc kubenswrapper[4812]: I0216 13:52:51.936861 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7eae7df6-e3b7-4ac5-bb18-6b781744747d","Type":"ContainerStarted","Data":"14b35180436933e78bfdd0a684859d00c45bac634707d9177b438a758f4707bf"} Feb 16 13:52:51 crc kubenswrapper[4812]: I0216 13:52:51.939027 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" event={"ID":"c4702718-14ef-4b62-acfa-016d0a04a952","Type":"ContainerStarted","Data":"0d5e1551841bde7c5c9374e4432e708037375fd2a5abbd5678c57ffb1c31efaa"} Feb 
16 13:52:51 crc kubenswrapper[4812]: I0216 13:52:51.984246 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" podStartSLOduration=3.54526238 podStartE2EDuration="3.984226902s" podCreationTimestamp="2026-02-16 13:52:48 +0000 UTC" firstStartedPulling="2026-02-16 13:52:49.492229658 +0000 UTC m=+1258.556560359" lastFinishedPulling="2026-02-16 13:52:49.93119418 +0000 UTC m=+1258.995524881" observedRunningTime="2026-02-16 13:52:51.978567649 +0000 UTC m=+1261.042898350" watchObservedRunningTime="2026-02-16 13:52:51.984226902 +0000 UTC m=+1261.048557603" Feb 16 13:52:52 crc kubenswrapper[4812]: I0216 13:52:52.009001 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=30.786468737 podStartE2EDuration="41.008982755s" podCreationTimestamp="2026-02-16 13:52:11 +0000 UTC" firstStartedPulling="2026-02-16 13:52:41.165770817 +0000 UTC m=+1250.230101518" lastFinishedPulling="2026-02-16 13:52:51.388284835 +0000 UTC m=+1260.452615536" observedRunningTime="2026-02-16 13:52:52.007653607 +0000 UTC m=+1261.071984308" watchObservedRunningTime="2026-02-16 13:52:52.008982755 +0000 UTC m=+1261.073313456" Feb 16 13:52:52 crc kubenswrapper[4812]: I0216 13:52:52.250617 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:52 crc kubenswrapper[4812]: I0216 13:52:52.307893 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=23.208201516 podStartE2EDuration="37.30786799s" podCreationTimestamp="2026-02-16 13:52:15 +0000 UTC" firstStartedPulling="2026-02-16 13:52:37.291875635 +0000 UTC m=+1246.356206336" lastFinishedPulling="2026-02-16 13:52:51.391542109 +0000 UTC m=+1260.455872810" observedRunningTime="2026-02-16 13:52:52.292671322 +0000 UTC m=+1261.357002023" watchObservedRunningTime="2026-02-16 13:52:52.30786799 +0000 
UTC m=+1261.372198711" Feb 16 13:52:52 crc kubenswrapper[4812]: I0216 13:52:52.339675 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.416012 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.417699 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.463796 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.483379 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.516509 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:53 crc kubenswrapper[4812]: E0216 13:52:53.516773 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 13:52:53 crc kubenswrapper[4812]: E0216 13:52:53.516793 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 13:52:53 crc kubenswrapper[4812]: E0216 13:52:53.516838 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift podName:7f34d582-3b55-4d2a-91b3-c64acd57981f nodeName:}" failed. No retries permitted until 2026-02-16 13:52:57.516822854 +0000 UTC m=+1266.581153555 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift") pod "swift-storage-0" (UID: "7f34d582-3b55-4d2a-91b3-c64acd57981f") : configmap "swift-ring-files" not found Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.604675 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dkwvj"] Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.606111 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.608765 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.609085 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.613313 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.652955 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dkwvj"] Feb 16 13:52:53 crc kubenswrapper[4812]: E0216 13:52:53.708980 4812 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 16 13:52:53 crc kubenswrapper[4812]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 16 13:52:53 crc kubenswrapper[4812]: > podSandboxID="462f8eb9a9230667b134c4446c906ab371db8b60cecce0b15e862a5b9b9556d0" Feb 16 13:52:53 crc kubenswrapper[4812]: E0216 13:52:53.709136 4812 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 16 13:52:53 crc kubenswrapper[4812]: init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chgwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},S
tartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-gl6hc_openstack(1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 16 13:52:53 crc kubenswrapper[4812]: > logger="UnhandledError" Feb 16 13:52:53 crc kubenswrapper[4812]: E0216 13:52:53.710355 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" podUID="1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.722990 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-swiftconf\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.723349 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3e7d63b8-7d3a-4169-b939-2ea11895b53a-ring-data-devices\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.723758 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e7d63b8-7d3a-4169-b939-2ea11895b53a-scripts\") pod 
\"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.723881 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-dispersionconf\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.723971 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9knxd\" (UniqueName: \"kubernetes.io/projected/3e7d63b8-7d3a-4169-b939-2ea11895b53a-kube-api-access-9knxd\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.724042 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3e7d63b8-7d3a-4169-b939-2ea11895b53a-etc-swift\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.724146 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-combined-ca-bundle\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.826357 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3e7d63b8-7d3a-4169-b939-2ea11895b53a-etc-swift\") pod 
\"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.826465 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-combined-ca-bundle\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.826521 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-swiftconf\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.826600 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3e7d63b8-7d3a-4169-b939-2ea11895b53a-ring-data-devices\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.826669 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e7d63b8-7d3a-4169-b939-2ea11895b53a-scripts\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.826715 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-dispersionconf\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " 
pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.826766 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9knxd\" (UniqueName: \"kubernetes.io/projected/3e7d63b8-7d3a-4169-b939-2ea11895b53a-kube-api-access-9knxd\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.826877 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3e7d63b8-7d3a-4169-b939-2ea11895b53a-etc-swift\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.827504 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3e7d63b8-7d3a-4169-b939-2ea11895b53a-ring-data-devices\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.828194 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e7d63b8-7d3a-4169-b939-2ea11895b53a-scripts\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.832357 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-dispersionconf\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.844102 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9knxd\" (UniqueName: \"kubernetes.io/projected/3e7d63b8-7d3a-4169-b939-2ea11895b53a-kube-api-access-9knxd\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.844427 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-swiftconf\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.856359 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-combined-ca-bundle\") pod \"swift-ring-rebalance-dkwvj\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.960531 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-hbl8x" Feb 16 13:52:53 crc kubenswrapper[4812]: I0216 13:52:53.968229 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:52:54 crc kubenswrapper[4812]: W0216 13:52:54.426069 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e7d63b8_7d3a_4169_b939_2ea11895b53a.slice/crio-86fe0ab49e4c3456dbdc41888418520e84729d09ab738018ff4fd202c19acbc5 WatchSource:0}: Error finding container 86fe0ab49e4c3456dbdc41888418520e84729d09ab738018ff4fd202c19acbc5: Status 404 returned error can't find the container with id 86fe0ab49e4c3456dbdc41888418520e84729d09ab738018ff4fd202c19acbc5 Feb 16 13:52:54 crc kubenswrapper[4812]: I0216 13:52:54.427525 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dkwvj"] Feb 16 13:52:54 crc kubenswrapper[4812]: I0216 13:52:54.475831 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dkwvj" event={"ID":"3e7d63b8-7d3a-4169-b939-2ea11895b53a","Type":"ContainerStarted","Data":"86fe0ab49e4c3456dbdc41888418520e84729d09ab738018ff4fd202c19acbc5"} Feb 16 13:52:54 crc kubenswrapper[4812]: I0216 13:52:54.477127 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:54 crc kubenswrapper[4812]: I0216 13:52:54.529651 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 16 13:52:54 crc kubenswrapper[4812]: I0216 13:52:54.539759 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 16 13:52:54 crc kubenswrapper[4812]: I0216 13:52:54.853114 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 16 13:52:54 crc kubenswrapper[4812]: I0216 13:52:54.853151 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 16 13:52:54 crc kubenswrapper[4812]: I0216 13:52:54.994375 4812 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.201749 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gl6hc"] Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.254418 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-gk6gl"] Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.258835 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.263753 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.336604 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-gk6gl"] Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.380285 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7nq9\" (UniqueName: \"kubernetes.io/projected/1630a179-1d00-4556-a867-72b31cd916fe-kube-api-access-j7nq9\") pod \"dnsmasq-dns-6c89d5d749-gk6gl\" (UID: \"1630a179-1d00-4556-a867-72b31cd916fe\") " pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.380412 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-gk6gl\" (UID: \"1630a179-1d00-4556-a867-72b31cd916fe\") " pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.380478 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-config\") pod \"dnsmasq-dns-6c89d5d749-gk6gl\" (UID: \"1630a179-1d00-4556-a867-72b31cd916fe\") " pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.380522 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-gk6gl\" (UID: \"1630a179-1d00-4556-a867-72b31cd916fe\") " pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.438924 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-tcrnd"] Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.441006 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.446840 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.479201 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-tcrnd"] Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.483055 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7nq9\" (UniqueName: \"kubernetes.io/projected/1630a179-1d00-4556-a867-72b31cd916fe-kube-api-access-j7nq9\") pod \"dnsmasq-dns-6c89d5d749-gk6gl\" (UID: \"1630a179-1d00-4556-a867-72b31cd916fe\") " pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.483149 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-gk6gl\" (UID: 
\"1630a179-1d00-4556-a867-72b31cd916fe\") " pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.483175 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-config\") pod \"dnsmasq-dns-6c89d5d749-gk6gl\" (UID: \"1630a179-1d00-4556-a867-72b31cd916fe\") " pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.483203 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-gk6gl\" (UID: \"1630a179-1d00-4556-a867-72b31cd916fe\") " pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.490678 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-gk6gl\" (UID: \"1630a179-1d00-4556-a867-72b31cd916fe\") " pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.495199 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-config\") pod \"dnsmasq-dns-6c89d5d749-gk6gl\" (UID: \"1630a179-1d00-4556-a867-72b31cd916fe\") " pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.495350 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-gk6gl\" (UID: \"1630a179-1d00-4556-a867-72b31cd916fe\") " pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 
13:52:55.513245 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.524814 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.534856 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.535379 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.535529 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-cb96h" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.535589 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.567226 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7nq9\" (UniqueName: \"kubernetes.io/projected/1630a179-1d00-4556-a867-72b31cd916fe-kube-api-access-j7nq9\") pod \"dnsmasq-dns-6c89d5d749-gk6gl\" (UID: \"1630a179-1d00-4556-a867-72b31cd916fe\") " pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.595189 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.596130 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-scripts\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.596240 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.596412 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb5g4\" (UniqueName: \"kubernetes.io/projected/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-kube-api-access-lb5g4\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.596452 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.596518 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-config\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: 
I0216 13:52:55.596546 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.596563 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.622642 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.640670 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-8jm9c"] Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.641044 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" podUID="c4702718-14ef-4b62-acfa-016d0a04a952" containerName="dnsmasq-dns" containerID="cri-o://0d5e1551841bde7c5c9374e4432e708037375fd2a5abbd5678c57ffb1c31efaa" gracePeriod=10 Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.733564 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-jbrfm"] Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.738025 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-config\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.738096 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.738129 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.738172 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzk5g\" (UniqueName: \"kubernetes.io/projected/56d45d6a-4e06-471e-bdc8-60d60af85545-kube-api-access-hzk5g\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.738271 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-scripts\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.738332 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d45d6a-4e06-471e-bdc8-60d60af85545-config\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.738391 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.738566 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/56d45d6a-4e06-471e-bdc8-60d60af85545-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.738704 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d45d6a-4e06-471e-bdc8-60d60af85545-combined-ca-bundle\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.738747 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/56d45d6a-4e06-471e-bdc8-60d60af85545-ovs-rundir\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.739106 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/56d45d6a-4e06-471e-bdc8-60d60af85545-ovn-rundir\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.739577 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb5g4\" 
(UniqueName: \"kubernetes.io/projected/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-kube-api-access-lb5g4\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.739621 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.768269 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.784629 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.791588 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-config\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.797439 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-scripts\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc 
kubenswrapper[4812]: I0216 13:52:55.799919 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.822137 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb5g4\" (UniqueName: \"kubernetes.io/projected/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-kube-api-access-lb5g4\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.824146 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c\") " pod="openstack/ovn-northd-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.845232 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/56d45d6a-4e06-471e-bdc8-60d60af85545-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.845347 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d45d6a-4e06-471e-bdc8-60d60af85545-combined-ca-bundle\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.845396 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" 
(UniqueName: \"kubernetes.io/host-path/56d45d6a-4e06-471e-bdc8-60d60af85545-ovs-rundir\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.845556 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/56d45d6a-4e06-471e-bdc8-60d60af85545-ovn-rundir\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.845902 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/56d45d6a-4e06-471e-bdc8-60d60af85545-ovs-rundir\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.846051 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/56d45d6a-4e06-471e-bdc8-60d60af85545-ovn-rundir\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.846179 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzk5g\" (UniqueName: \"kubernetes.io/projected/56d45d6a-4e06-471e-bdc8-60d60af85545-kube-api-access-hzk5g\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.846288 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d45d6a-4e06-471e-bdc8-60d60af85545-config\") pod 
\"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.847223 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d45d6a-4e06-471e-bdc8-60d60af85545-config\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.851284 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d45d6a-4e06-471e-bdc8-60d60af85545-combined-ca-bundle\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.914830 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jbrfm"] Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.919468 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/56d45d6a-4e06-471e-bdc8-60d60af85545-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.920059 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.925731 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzk5g\" (UniqueName: \"kubernetes.io/projected/56d45d6a-4e06-471e-bdc8-60d60af85545-kube-api-access-hzk5g\") pod \"ovn-controller-metrics-tcrnd\" (UID: \"56d45d6a-4e06-471e-bdc8-60d60af85545\") " 
pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.926039 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:52:55 crc kubenswrapper[4812]: I0216 13:52:55.939667 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:56.028005 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:56.655415 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-config\") pod \"dnsmasq-dns-698758b865-jbrfm\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:56.655607 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jbrfm\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:56.655725 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbnwd\" (UniqueName: \"kubernetes.io/projected/1eb07864-3ace-404d-b092-271e2a57e677-kube-api-access-fbnwd\") pod \"dnsmasq-dns-698758b865-jbrfm\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:56.655824 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jbrfm\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:56.655951 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-dns-svc\") pod \"dnsmasq-dns-698758b865-jbrfm\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:56.684885 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:56.684946 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:57.311023 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-tcrnd" Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:57.320626 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-config\") pod \"dnsmasq-dns-698758b865-jbrfm\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:57.320783 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jbrfm\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:57.320912 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbnwd\" (UniqueName: \"kubernetes.io/projected/1eb07864-3ace-404d-b092-271e2a57e677-kube-api-access-fbnwd\") pod \"dnsmasq-dns-698758b865-jbrfm\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:57.321017 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jbrfm\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:57.321148 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-dns-svc\") pod \"dnsmasq-dns-698758b865-jbrfm\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " pod="openstack/dnsmasq-dns-698758b865-jbrfm" 
Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:57.325164 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-dns-svc\") pod \"dnsmasq-dns-698758b865-jbrfm\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:57.325405 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jbrfm\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:52:57 crc kubenswrapper[4812]: I0216 13:52:57.325825 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jbrfm\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.092699 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:52:58 crc kubenswrapper[4812]: E0216 13:52:58.093367 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 13:52:58 crc kubenswrapper[4812]: E0216 13:52:58.093384 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 13:52:58 crc kubenswrapper[4812]: E0216 13:52:58.093463 4812 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift podName:7f34d582-3b55-4d2a-91b3-c64acd57981f nodeName:}" failed. No retries permitted until 2026-02-16 13:53:06.093424468 +0000 UTC m=+1275.157755169 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift") pod "swift-storage-0" (UID: "7f34d582-3b55-4d2a-91b3-c64acd57981f") : configmap "swift-ring-files" not found Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.099624 4812 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.435403349s: [/var/lib/containers/storage/overlay/07022e6418f88c6a69485011bfe43cb39be803ecb95f59c3b760d87a0841a3d1/diff /var/log/pods/openshift-etcd_etcd-crc_2139d3e2895fc6797b9c76a1b4c9886d/etcd/0.log]; will not log again for this container unless duration exceeds 2s Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.163662 4812 generic.go:334] "Generic (PLEG): container finished" podID="c4702718-14ef-4b62-acfa-016d0a04a952" containerID="0d5e1551841bde7c5c9374e4432e708037375fd2a5abbd5678c57ffb1c31efaa" exitCode=0 Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.163802 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" event={"ID":"c4702718-14ef-4b62-acfa-016d0a04a952","Type":"ContainerDied","Data":"0d5e1551841bde7c5c9374e4432e708037375fd2a5abbd5678c57ffb1c31efaa"} Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.170170 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-config\") pod \"dnsmasq-dns-698758b865-jbrfm\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.254876 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" event={"ID":"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3","Type":"ContainerDied","Data":"462f8eb9a9230667b134c4446c906ab371db8b60cecce0b15e862a5b9b9556d0"} Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.255524 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="462f8eb9a9230667b134c4446c906ab371db8b60cecce0b15e862a5b9b9556d0" Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.280852 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbnwd\" (UniqueName: \"kubernetes.io/projected/1eb07864-3ace-404d-b092-271e2a57e677-kube-api-access-fbnwd\") pod \"dnsmasq-dns-698758b865-jbrfm\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.299423 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.445540 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-config\") pod \"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3\" (UID: \"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3\") " Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.446165 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chgwz\" (UniqueName: \"kubernetes.io/projected/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-kube-api-access-chgwz\") pod \"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3\" (UID: \"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3\") " Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.446198 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-dns-svc\") pod \"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3\" (UID: 
\"1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3\") " Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.476776 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-kube-api-access-chgwz" (OuterVolumeSpecName: "kube-api-access-chgwz") pod "1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3" (UID: "1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3"). InnerVolumeSpecName "kube-api-access-chgwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.492131 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3" (UID: "1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.520092 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-config" (OuterVolumeSpecName: "config") pod "1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3" (UID: "1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.551127 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.557144 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.557213 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chgwz\" (UniqueName: \"kubernetes.io/projected/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-kube-api-access-chgwz\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.557231 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.590352 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.866203 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-gk6gl"] Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.892547 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" Feb 16 13:52:58 crc kubenswrapper[4812]: W0216 13:52:58.898645 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1630a179_1d00_4556_a867_72b31cd916fe.slice/crio-30cb0a67ec125527c63e9ef1dde4956eb1a9d93651d1ebc2a1510ddf9302e07e WatchSource:0}: Error finding container 30cb0a67ec125527c63e9ef1dde4956eb1a9d93651d1ebc2a1510ddf9302e07e: Status 404 returned error can't find the container with id 30cb0a67ec125527c63e9ef1dde4956eb1a9d93651d1ebc2a1510ddf9302e07e Feb 16 13:52:58 crc kubenswrapper[4812]: I0216 13:52:58.989881 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.044870 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkxbb\" (UniqueName: \"kubernetes.io/projected/c4702718-14ef-4b62-acfa-016d0a04a952-kube-api-access-nkxbb\") pod \"c4702718-14ef-4b62-acfa-016d0a04a952\" (UID: \"c4702718-14ef-4b62-acfa-016d0a04a952\") " Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.044982 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4702718-14ef-4b62-acfa-016d0a04a952-dns-svc\") pod \"c4702718-14ef-4b62-acfa-016d0a04a952\" (UID: \"c4702718-14ef-4b62-acfa-016d0a04a952\") " Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.045138 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4702718-14ef-4b62-acfa-016d0a04a952-config\") pod \"c4702718-14ef-4b62-acfa-016d0a04a952\" (UID: \"c4702718-14ef-4b62-acfa-016d0a04a952\") " Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.068875 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c4702718-14ef-4b62-acfa-016d0a04a952-kube-api-access-nkxbb" (OuterVolumeSpecName: "kube-api-access-nkxbb") pod "c4702718-14ef-4b62-acfa-016d0a04a952" (UID: "c4702718-14ef-4b62-acfa-016d0a04a952"). InnerVolumeSpecName "kube-api-access-nkxbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.152068 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.156379 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkxbb\" (UniqueName: \"kubernetes.io/projected/c4702718-14ef-4b62-acfa-016d0a04a952-kube-api-access-nkxbb\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.163036 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4702718-14ef-4b62-acfa-016d0a04a952-config" (OuterVolumeSpecName: "config") pod "c4702718-14ef-4b62-acfa-016d0a04a952" (UID: "c4702718-14ef-4b62-acfa-016d0a04a952"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.164331 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4702718-14ef-4b62-acfa-016d0a04a952-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4702718-14ef-4b62-acfa-016d0a04a952" (UID: "c4702718-14ef-4b62-acfa-016d0a04a952"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:52:59 crc kubenswrapper[4812]: W0216 13:52:59.170974 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b3ef2f4_5a54_4fbd_9ecf_de1f0174095c.slice/crio-f3648ac428ab624f0364bb3098cfd84c773ced6d1f8184d4bb50b61ddb6ae952 WatchSource:0}: Error finding container f3648ac428ab624f0364bb3098cfd84c773ced6d1f8184d4bb50b61ddb6ae952: Status 404 returned error can't find the container with id f3648ac428ab624f0364bb3098cfd84c773ced6d1f8184d4bb50b61ddb6ae952 Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.261367 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4702718-14ef-4b62-acfa-016d0a04a952-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.262462 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4702718-14ef-4b62-acfa-016d0a04a952-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.264790 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-tcrnd"] Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.279061 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" event={"ID":"1630a179-1d00-4556-a867-72b31cd916fe","Type":"ContainerStarted","Data":"30cb0a67ec125527c63e9ef1dde4956eb1a9d93651d1ebc2a1510ddf9302e07e"} Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.282605 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.282968 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" event={"ID":"c4702718-14ef-4b62-acfa-016d0a04a952","Type":"ContainerDied","Data":"d87335e405646b370b1f784b066f94cff86b34009da250601619e6667f8a8c59"} Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.283092 4812 scope.go:117] "RemoveContainer" containerID="0d5e1551841bde7c5c9374e4432e708037375fd2a5abbd5678c57ffb1c31efaa" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.293142 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c","Type":"ContainerStarted","Data":"f3648ac428ab624f0364bb3098cfd84c773ced6d1f8184d4bb50b61ddb6ae952"} Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.293252 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gl6hc" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.477694 4812 scope.go:117] "RemoveContainer" containerID="afd050bd7daa84dc2f123f80a549dbb7ffcaa7742a836165793241a283c950e4" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.633825 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jbrfm"] Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.647741 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.670354 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gl6hc"] Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.715982 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gl6hc"] Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.793106 4812 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-8jm9c"] Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.812472 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-8jm9c"] Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.872401 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-v9pfn"] Feb 16 13:52:59 crc kubenswrapper[4812]: E0216 13:52:59.873462 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4702718-14ef-4b62-acfa-016d0a04a952" containerName="init" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.873482 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4702718-14ef-4b62-acfa-016d0a04a952" containerName="init" Feb 16 13:52:59 crc kubenswrapper[4812]: E0216 13:52:59.873546 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4702718-14ef-4b62-acfa-016d0a04a952" containerName="dnsmasq-dns" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.873554 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4702718-14ef-4b62-acfa-016d0a04a952" containerName="dnsmasq-dns" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.873870 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4702718-14ef-4b62-acfa-016d0a04a952" containerName="dnsmasq-dns" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.875681 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-v9pfn" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.928566 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3" path="/var/lib/kubelet/pods/1fa5b4b0-1881-4c6f-8cf6-60429c6aa1d3/volumes" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.929597 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4702718-14ef-4b62-acfa-016d0a04a952" path="/var/lib/kubelet/pods/c4702718-14ef-4b62-acfa-016d0a04a952/volumes" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.932465 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-474b-account-create-update-gsjf7"] Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.935586 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v9pfn"] Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.935612 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-474b-account-create-update-gsjf7"] Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.935769 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-474b-account-create-update-gsjf7" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.940011 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.959680 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z5m5\" (UniqueName: \"kubernetes.io/projected/239d953a-0da6-460c-8dce-99ff36a1015b-kube-api-access-8z5m5\") pod \"placement-db-create-v9pfn\" (UID: \"239d953a-0da6-460c-8dce-99ff36a1015b\") " pod="openstack/placement-db-create-v9pfn" Feb 16 13:52:59 crc kubenswrapper[4812]: I0216 13:52:59.960561 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239d953a-0da6-460c-8dce-99ff36a1015b-operator-scripts\") pod \"placement-db-create-v9pfn\" (UID: \"239d953a-0da6-460c-8dce-99ff36a1015b\") " pod="openstack/placement-db-create-v9pfn" Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.063399 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239d953a-0da6-460c-8dce-99ff36a1015b-operator-scripts\") pod \"placement-db-create-v9pfn\" (UID: \"239d953a-0da6-460c-8dce-99ff36a1015b\") " pod="openstack/placement-db-create-v9pfn" Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.063686 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zhd2\" (UniqueName: \"kubernetes.io/projected/3054c9c5-945c-43d4-a2c5-adcc6d116329-kube-api-access-7zhd2\") pod \"placement-474b-account-create-update-gsjf7\" (UID: \"3054c9c5-945c-43d4-a2c5-adcc6d116329\") " pod="openstack/placement-474b-account-create-update-gsjf7" Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.063762 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3054c9c5-945c-43d4-a2c5-adcc6d116329-operator-scripts\") pod \"placement-474b-account-create-update-gsjf7\" (UID: \"3054c9c5-945c-43d4-a2c5-adcc6d116329\") " pod="openstack/placement-474b-account-create-update-gsjf7" Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.063916 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z5m5\" (UniqueName: \"kubernetes.io/projected/239d953a-0da6-460c-8dce-99ff36a1015b-kube-api-access-8z5m5\") pod \"placement-db-create-v9pfn\" (UID: \"239d953a-0da6-460c-8dce-99ff36a1015b\") " pod="openstack/placement-db-create-v9pfn" Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.065936 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239d953a-0da6-460c-8dce-99ff36a1015b-operator-scripts\") pod \"placement-db-create-v9pfn\" (UID: \"239d953a-0da6-460c-8dce-99ff36a1015b\") " pod="openstack/placement-db-create-v9pfn" Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.091409 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z5m5\" (UniqueName: \"kubernetes.io/projected/239d953a-0da6-460c-8dce-99ff36a1015b-kube-api-access-8z5m5\") pod \"placement-db-create-v9pfn\" (UID: \"239d953a-0da6-460c-8dce-99ff36a1015b\") " pod="openstack/placement-db-create-v9pfn" Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.165532 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zhd2\" (UniqueName: \"kubernetes.io/projected/3054c9c5-945c-43d4-a2c5-adcc6d116329-kube-api-access-7zhd2\") pod \"placement-474b-account-create-update-gsjf7\" (UID: \"3054c9c5-945c-43d4-a2c5-adcc6d116329\") " pod="openstack/placement-474b-account-create-update-gsjf7" Feb 16 13:53:00 crc 
kubenswrapper[4812]: I0216 13:53:00.165592 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3054c9c5-945c-43d4-a2c5-adcc6d116329-operator-scripts\") pod \"placement-474b-account-create-update-gsjf7\" (UID: \"3054c9c5-945c-43d4-a2c5-adcc6d116329\") " pod="openstack/placement-474b-account-create-update-gsjf7" Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.166423 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3054c9c5-945c-43d4-a2c5-adcc6d116329-operator-scripts\") pod \"placement-474b-account-create-update-gsjf7\" (UID: \"3054c9c5-945c-43d4-a2c5-adcc6d116329\") " pod="openstack/placement-474b-account-create-update-gsjf7" Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.188131 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zhd2\" (UniqueName: \"kubernetes.io/projected/3054c9c5-945c-43d4-a2c5-adcc6d116329-kube-api-access-7zhd2\") pod \"placement-474b-account-create-update-gsjf7\" (UID: \"3054c9c5-945c-43d4-a2c5-adcc6d116329\") " pod="openstack/placement-474b-account-create-update-gsjf7" Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.258911 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v9pfn" Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.281853 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-474b-account-create-update-gsjf7" Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.313397 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tcrnd" event={"ID":"56d45d6a-4e06-471e-bdc8-60d60af85545","Type":"ContainerStarted","Data":"a462b1a3fed8e872bc53bf325ab377f9f26228f363311b5cdd669b08f646b775"} Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.315275 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tcrnd" event={"ID":"56d45d6a-4e06-471e-bdc8-60d60af85545","Type":"ContainerStarted","Data":"40a7bd58723b314287e502d8376bb196145d647178a57e82e76bbe87c3b0a606"} Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.318349 4812 generic.go:334] "Generic (PLEG): container finished" podID="1630a179-1d00-4556-a867-72b31cd916fe" containerID="94d9485495eb074fb35c657e5e83e08983103e37ea6ad8b30be801e90be5dbd4" exitCode=0 Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.318434 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" event={"ID":"1630a179-1d00-4556-a867-72b31cd916fe","Type":"ContainerDied","Data":"94d9485495eb074fb35c657e5e83e08983103e37ea6ad8b30be801e90be5dbd4"} Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.326390 4812 generic.go:334] "Generic (PLEG): container finished" podID="1eb07864-3ace-404d-b092-271e2a57e677" containerID="a476f4e4afb1d565c800ee28bb8344326e7fd55311c92bc69dba2ffd4b724d15" exitCode=0 Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.327970 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jbrfm" event={"ID":"1eb07864-3ace-404d-b092-271e2a57e677","Type":"ContainerDied","Data":"a476f4e4afb1d565c800ee28bb8344326e7fd55311c92bc69dba2ffd4b724d15"} Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.328023 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-698758b865-jbrfm" event={"ID":"1eb07864-3ace-404d-b092-271e2a57e677","Type":"ContainerStarted","Data":"cfcc7d026302b9a65ebc67a6a1c166a162d33f5fbc47b1381efe1aad299b4c42"} Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.432871 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-tcrnd" podStartSLOduration=5.4328501750000004 podStartE2EDuration="5.432850175s" podCreationTimestamp="2026-02-16 13:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:53:00.338087414 +0000 UTC m=+1269.402418135" watchObservedRunningTime="2026-02-16 13:53:00.432850175 +0000 UTC m=+1269.497180876" Feb 16 13:53:00 crc kubenswrapper[4812]: I0216 13:53:00.888172 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v9pfn"] Feb 16 13:53:01 crc kubenswrapper[4812]: W0216 13:53:01.575003 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod239d953a_0da6_460c_8dce_99ff36a1015b.slice/crio-4a3345465167e3069d2fcd20786921d00b65861f275b585f663707f355ad0151 WatchSource:0}: Error finding container 4a3345465167e3069d2fcd20786921d00b65861f275b585f663707f355ad0151: Status 404 returned error can't find the container with id 4a3345465167e3069d2fcd20786921d00b65861f275b585f663707f355ad0151 Feb 16 13:53:02 crc kubenswrapper[4812]: I0216 13:53:02.367788 4812 generic.go:334] "Generic (PLEG): container finished" podID="96cb02af-deed-4da5-96cf-28d69592caed" containerID="800f921e1c83753daea77f0b0be9cf51ebc51e6f4f8b82276eab7f955e87c5c2" exitCode=0 Feb 16 13:53:02 crc kubenswrapper[4812]: I0216 13:53:02.367895 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" 
event={"ID":"96cb02af-deed-4da5-96cf-28d69592caed","Type":"ContainerDied","Data":"800f921e1c83753daea77f0b0be9cf51ebc51e6f4f8b82276eab7f955e87c5c2"} Feb 16 13:53:02 crc kubenswrapper[4812]: I0216 13:53:02.370021 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v9pfn" event={"ID":"239d953a-0da6-460c-8dce-99ff36a1015b","Type":"ContainerStarted","Data":"4a3345465167e3069d2fcd20786921d00b65861f275b585f663707f355ad0151"} Feb 16 13:53:03 crc kubenswrapper[4812]: I0216 13:53:03.565964 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8fwqw"] Feb 16 13:53:03 crc kubenswrapper[4812]: I0216 13:53:03.567972 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8fwqw" Feb 16 13:53:03 crc kubenswrapper[4812]: I0216 13:53:03.581679 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 13:53:03 crc kubenswrapper[4812]: I0216 13:53:03.631077 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8fwqw"] Feb 16 13:53:03 crc kubenswrapper[4812]: I0216 13:53:03.645937 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-684s8\" (UniqueName: \"kubernetes.io/projected/70f011f3-3d76-41fd-bf20-f16a93df32b7-kube-api-access-684s8\") pod \"root-account-create-update-8fwqw\" (UID: \"70f011f3-3d76-41fd-bf20-f16a93df32b7\") " pod="openstack/root-account-create-update-8fwqw" Feb 16 13:53:03 crc kubenswrapper[4812]: I0216 13:53:03.646105 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70f011f3-3d76-41fd-bf20-f16a93df32b7-operator-scripts\") pod \"root-account-create-update-8fwqw\" (UID: \"70f011f3-3d76-41fd-bf20-f16a93df32b7\") " 
pod="openstack/root-account-create-update-8fwqw" Feb 16 13:53:03 crc kubenswrapper[4812]: I0216 13:53:03.694879 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-474b-account-create-update-gsjf7"] Feb 16 13:53:03 crc kubenswrapper[4812]: W0216 13:53:03.703276 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3054c9c5_945c_43d4_a2c5_adcc6d116329.slice/crio-1546df5434fd632a2fdded3884e51c3e7a6cadb4846e4d9a821387563e406554 WatchSource:0}: Error finding container 1546df5434fd632a2fdded3884e51c3e7a6cadb4846e4d9a821387563e406554: Status 404 returned error can't find the container with id 1546df5434fd632a2fdded3884e51c3e7a6cadb4846e4d9a821387563e406554 Feb 16 13:53:03 crc kubenswrapper[4812]: I0216 13:53:03.749398 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-684s8\" (UniqueName: \"kubernetes.io/projected/70f011f3-3d76-41fd-bf20-f16a93df32b7-kube-api-access-684s8\") pod \"root-account-create-update-8fwqw\" (UID: \"70f011f3-3d76-41fd-bf20-f16a93df32b7\") " pod="openstack/root-account-create-update-8fwqw" Feb 16 13:53:03 crc kubenswrapper[4812]: I0216 13:53:03.749604 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70f011f3-3d76-41fd-bf20-f16a93df32b7-operator-scripts\") pod \"root-account-create-update-8fwqw\" (UID: \"70f011f3-3d76-41fd-bf20-f16a93df32b7\") " pod="openstack/root-account-create-update-8fwqw" Feb 16 13:53:03 crc kubenswrapper[4812]: I0216 13:53:03.751013 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70f011f3-3d76-41fd-bf20-f16a93df32b7-operator-scripts\") pod \"root-account-create-update-8fwqw\" (UID: \"70f011f3-3d76-41fd-bf20-f16a93df32b7\") " pod="openstack/root-account-create-update-8fwqw" Feb 16 13:53:03 crc 
kubenswrapper[4812]: I0216 13:53:03.773392 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-684s8\" (UniqueName: \"kubernetes.io/projected/70f011f3-3d76-41fd-bf20-f16a93df32b7-kube-api-access-684s8\") pod \"root-account-create-update-8fwqw\" (UID: \"70f011f3-3d76-41fd-bf20-f16a93df32b7\") " pod="openstack/root-account-create-update-8fwqw" Feb 16 13:53:03 crc kubenswrapper[4812]: I0216 13:53:03.802513 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8fwqw" Feb 16 13:53:03 crc kubenswrapper[4812]: I0216 13:53:03.870873 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7cb5889db5-8jm9c" podUID="c4702718-14ef-4b62-acfa-016d0a04a952" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Feb 16 13:53:04 crc kubenswrapper[4812]: I0216 13:53:04.403344 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-474b-account-create-update-gsjf7" event={"ID":"3054c9c5-945c-43d4-a2c5-adcc6d116329","Type":"ContainerStarted","Data":"31b62e22433d87c016c20b14823ea928fcb93820167abd7a9030d9504f64e34e"} Feb 16 13:53:04 crc kubenswrapper[4812]: I0216 13:53:04.404930 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-474b-account-create-update-gsjf7" event={"ID":"3054c9c5-945c-43d4-a2c5-adcc6d116329","Type":"ContainerStarted","Data":"1546df5434fd632a2fdded3884e51c3e7a6cadb4846e4d9a821387563e406554"} Feb 16 13:53:04 crc kubenswrapper[4812]: I0216 13:53:04.414961 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dkwvj" event={"ID":"3e7d63b8-7d3a-4169-b939-2ea11895b53a","Type":"ContainerStarted","Data":"f5c4b9f533db50e25dcb57e3c193cbc1272645f8cc4f0269a5c0bdcf9345e8e0"} Feb 16 13:53:04 crc kubenswrapper[4812]: I0216 13:53:04.420906 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c","Type":"ContainerStarted","Data":"7d9971babc4eb765cdd592889c44182cabe9acc6c5dd70cebbd0494375f9da4f"} Feb 16 13:53:04 crc kubenswrapper[4812]: I0216 13:53:04.427991 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-474b-account-create-update-gsjf7" podStartSLOduration=5.42796044 podStartE2EDuration="5.42796044s" podCreationTimestamp="2026-02-16 13:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:53:04.424087528 +0000 UTC m=+1273.488418229" watchObservedRunningTime="2026-02-16 13:53:04.42796044 +0000 UTC m=+1273.492291141" Feb 16 13:53:04 crc kubenswrapper[4812]: I0216 13:53:04.433738 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" event={"ID":"1630a179-1d00-4556-a867-72b31cd916fe","Type":"ContainerStarted","Data":"5fa49f3d2408c9afc0b4541f429eec56463105e2959addded664399aac98a32a"} Feb 16 13:53:04 crc kubenswrapper[4812]: I0216 13:53:04.433856 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:53:04 crc kubenswrapper[4812]: I0216 13:53:04.448668 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v9pfn" event={"ID":"239d953a-0da6-460c-8dce-99ff36a1015b","Type":"ContainerDied","Data":"07bbd2e2ce9e6f3368748dea83a509ea68554777c5c6f36e0304ca5d77e69d60"} Feb 16 13:53:04 crc kubenswrapper[4812]: I0216 13:53:04.448826 4812 generic.go:334] "Generic (PLEG): container finished" podID="239d953a-0da6-460c-8dce-99ff36a1015b" containerID="07bbd2e2ce9e6f3368748dea83a509ea68554777c5c6f36e0304ca5d77e69d60" exitCode=0 Feb 16 13:53:04 crc kubenswrapper[4812]: I0216 13:53:04.466525 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jbrfm" 
event={"ID":"1eb07864-3ace-404d-b092-271e2a57e677","Type":"ContainerStarted","Data":"17efbac5d5e1ebaf817d9c9a8fe12168b35af20f11dd517db48a028d31271a3a"} Feb 16 13:53:04 crc kubenswrapper[4812]: I0216 13:53:04.467553 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:53:04 crc kubenswrapper[4812]: I0216 13:53:04.476635 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-dkwvj" podStartSLOduration=2.256007869 podStartE2EDuration="11.476607852s" podCreationTimestamp="2026-02-16 13:52:53 +0000 UTC" firstStartedPulling="2026-02-16 13:52:54.42922408 +0000 UTC m=+1263.493554781" lastFinishedPulling="2026-02-16 13:53:03.649824063 +0000 UTC m=+1272.714154764" observedRunningTime="2026-02-16 13:53:04.460194419 +0000 UTC m=+1273.524525140" watchObservedRunningTime="2026-02-16 13:53:04.476607852 +0000 UTC m=+1273.540938553" Feb 16 13:53:04 crc kubenswrapper[4812]: I0216 13:53:04.513883 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8fwqw"] Feb 16 13:53:04 crc kubenswrapper[4812]: I0216 13:53:04.531306 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" podStartSLOduration=9.531264227 podStartE2EDuration="9.531264227s" podCreationTimestamp="2026-02-16 13:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:53:04.491707807 +0000 UTC m=+1273.556038508" watchObservedRunningTime="2026-02-16 13:53:04.531264227 +0000 UTC m=+1273.595594928" Feb 16 13:53:04 crc kubenswrapper[4812]: I0216 13:53:04.588757 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-jbrfm" podStartSLOduration=9.588720123 podStartE2EDuration="9.588720123s" podCreationTimestamp="2026-02-16 13:52:55 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:53:04.566943685 +0000 UTC m=+1273.631274386" watchObservedRunningTime="2026-02-16 13:53:04.588720123 +0000 UTC m=+1273.653050824" Feb 16 13:53:05 crc kubenswrapper[4812]: I0216 13:53:05.491396 4812 generic.go:334] "Generic (PLEG): container finished" podID="70f011f3-3d76-41fd-bf20-f16a93df32b7" containerID="7b60975c6cf3122e703aa830322893a78a35864c8197a1b883e66c3f41e8d577" exitCode=0 Feb 16 13:53:05 crc kubenswrapper[4812]: I0216 13:53:05.492815 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8fwqw" event={"ID":"70f011f3-3d76-41fd-bf20-f16a93df32b7","Type":"ContainerDied","Data":"7b60975c6cf3122e703aa830322893a78a35864c8197a1b883e66c3f41e8d577"} Feb 16 13:53:05 crc kubenswrapper[4812]: I0216 13:53:05.492902 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8fwqw" event={"ID":"70f011f3-3d76-41fd-bf20-f16a93df32b7","Type":"ContainerStarted","Data":"2acc4c8fd13f92031adc746d04bf3e445228ed5cde31939f2876f69adf58a952"} Feb 16 13:53:05 crc kubenswrapper[4812]: I0216 13:53:05.497596 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c","Type":"ContainerStarted","Data":"e2b719c9ef69cb21b627a6e79848663492495baa3f30b41bf630654ebdbcb879"} Feb 16 13:53:05 crc kubenswrapper[4812]: I0216 13:53:05.498059 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 16 13:53:05 crc kubenswrapper[4812]: I0216 13:53:05.500867 4812 generic.go:334] "Generic (PLEG): container finished" podID="3054c9c5-945c-43d4-a2c5-adcc6d116329" containerID="31b62e22433d87c016c20b14823ea928fcb93820167abd7a9030d9504f64e34e" exitCode=0 Feb 16 13:53:05 crc kubenswrapper[4812]: I0216 13:53:05.501047 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-474b-account-create-update-gsjf7" event={"ID":"3054c9c5-945c-43d4-a2c5-adcc6d116329","Type":"ContainerDied","Data":"31b62e22433d87c016c20b14823ea928fcb93820167abd7a9030d9504f64e34e"} Feb 16 13:53:05 crc kubenswrapper[4812]: I0216 13:53:05.557818 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=6.085106883 podStartE2EDuration="10.557788283s" podCreationTimestamp="2026-02-16 13:52:55 +0000 UTC" firstStartedPulling="2026-02-16 13:52:59.204400869 +0000 UTC m=+1268.268731570" lastFinishedPulling="2026-02-16 13:53:03.677082259 +0000 UTC m=+1272.741412970" observedRunningTime="2026-02-16 13:53:05.549400762 +0000 UTC m=+1274.613731483" watchObservedRunningTime="2026-02-16 13:53:05.557788283 +0000 UTC m=+1274.622118984" Feb 16 13:53:06 crc kubenswrapper[4812]: I0216 13:53:06.147833 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:53:06 crc kubenswrapper[4812]: E0216 13:53:06.148112 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 13:53:06 crc kubenswrapper[4812]: E0216 13:53:06.148541 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 13:53:06 crc kubenswrapper[4812]: E0216 13:53:06.148624 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift podName:7f34d582-3b55-4d2a-91b3-c64acd57981f nodeName:}" failed. No retries permitted until 2026-02-16 13:53:22.148599572 +0000 UTC m=+1291.212930263 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift") pod "swift-storage-0" (UID: "7f34d582-3b55-4d2a-91b3-c64acd57981f") : configmap "swift-ring-files" not found Feb 16 13:53:06 crc kubenswrapper[4812]: I0216 13:53:06.244672 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v9pfn" Feb 16 13:53:06 crc kubenswrapper[4812]: I0216 13:53:06.352937 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239d953a-0da6-460c-8dce-99ff36a1015b-operator-scripts\") pod \"239d953a-0da6-460c-8dce-99ff36a1015b\" (UID: \"239d953a-0da6-460c-8dce-99ff36a1015b\") " Feb 16 13:53:06 crc kubenswrapper[4812]: I0216 13:53:06.353414 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z5m5\" (UniqueName: \"kubernetes.io/projected/239d953a-0da6-460c-8dce-99ff36a1015b-kube-api-access-8z5m5\") pod \"239d953a-0da6-460c-8dce-99ff36a1015b\" (UID: \"239d953a-0da6-460c-8dce-99ff36a1015b\") " Feb 16 13:53:06 crc kubenswrapper[4812]: I0216 13:53:06.353890 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/239d953a-0da6-460c-8dce-99ff36a1015b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "239d953a-0da6-460c-8dce-99ff36a1015b" (UID: "239d953a-0da6-460c-8dce-99ff36a1015b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:06 crc kubenswrapper[4812]: I0216 13:53:06.354025 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239d953a-0da6-460c-8dce-99ff36a1015b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:06 crc kubenswrapper[4812]: I0216 13:53:06.362689 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/239d953a-0da6-460c-8dce-99ff36a1015b-kube-api-access-8z5m5" (OuterVolumeSpecName: "kube-api-access-8z5m5") pod "239d953a-0da6-460c-8dce-99ff36a1015b" (UID: "239d953a-0da6-460c-8dce-99ff36a1015b"). InnerVolumeSpecName "kube-api-access-8z5m5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:06 crc kubenswrapper[4812]: I0216 13:53:06.456365 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z5m5\" (UniqueName: \"kubernetes.io/projected/239d953a-0da6-460c-8dce-99ff36a1015b-kube-api-access-8z5m5\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:06 crc kubenswrapper[4812]: I0216 13:53:06.514970 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-v9pfn" Feb 16 13:53:06 crc kubenswrapper[4812]: I0216 13:53:06.515035 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v9pfn" event={"ID":"239d953a-0da6-460c-8dce-99ff36a1015b","Type":"ContainerDied","Data":"4a3345465167e3069d2fcd20786921d00b65861f275b585f663707f355ad0151"} Feb 16 13:53:06 crc kubenswrapper[4812]: I0216 13:53:06.515592 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a3345465167e3069d2fcd20786921d00b65861f275b585f663707f355ad0151" Feb 16 13:53:06 crc kubenswrapper[4812]: I0216 13:53:06.518410 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"96cb02af-deed-4da5-96cf-28d69592caed","Type":"ContainerStarted","Data":"889317c46f5c47bb98c6254169f431669dcd79a7ed234295a3472d3f6b6f4e1c"} Feb 16 13:53:06 crc kubenswrapper[4812]: I0216 13:53:06.937948 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-474b-account-create-update-gsjf7" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.001392 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8fwqw" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.075468 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3054c9c5-945c-43d4-a2c5-adcc6d116329-operator-scripts\") pod \"3054c9c5-945c-43d4-a2c5-adcc6d116329\" (UID: \"3054c9c5-945c-43d4-a2c5-adcc6d116329\") " Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.075644 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-684s8\" (UniqueName: \"kubernetes.io/projected/70f011f3-3d76-41fd-bf20-f16a93df32b7-kube-api-access-684s8\") pod \"70f011f3-3d76-41fd-bf20-f16a93df32b7\" (UID: \"70f011f3-3d76-41fd-bf20-f16a93df32b7\") " Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.075747 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zhd2\" (UniqueName: \"kubernetes.io/projected/3054c9c5-945c-43d4-a2c5-adcc6d116329-kube-api-access-7zhd2\") pod \"3054c9c5-945c-43d4-a2c5-adcc6d116329\" (UID: \"3054c9c5-945c-43d4-a2c5-adcc6d116329\") " Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.075764 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70f011f3-3d76-41fd-bf20-f16a93df32b7-operator-scripts\") pod \"70f011f3-3d76-41fd-bf20-f16a93df32b7\" (UID: \"70f011f3-3d76-41fd-bf20-f16a93df32b7\") " Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.076772 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3054c9c5-945c-43d4-a2c5-adcc6d116329-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3054c9c5-945c-43d4-a2c5-adcc6d116329" (UID: "3054c9c5-945c-43d4-a2c5-adcc6d116329"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.077530 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f011f3-3d76-41fd-bf20-f16a93df32b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70f011f3-3d76-41fd-bf20-f16a93df32b7" (UID: "70f011f3-3d76-41fd-bf20-f16a93df32b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.082769 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3054c9c5-945c-43d4-a2c5-adcc6d116329-kube-api-access-7zhd2" (OuterVolumeSpecName: "kube-api-access-7zhd2") pod "3054c9c5-945c-43d4-a2c5-adcc6d116329" (UID: "3054c9c5-945c-43d4-a2c5-adcc6d116329"). InnerVolumeSpecName "kube-api-access-7zhd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.083345 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f011f3-3d76-41fd-bf20-f16a93df32b7-kube-api-access-684s8" (OuterVolumeSpecName: "kube-api-access-684s8") pod "70f011f3-3d76-41fd-bf20-f16a93df32b7" (UID: "70f011f3-3d76-41fd-bf20-f16a93df32b7"). InnerVolumeSpecName "kube-api-access-684s8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.179925 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3054c9c5-945c-43d4-a2c5-adcc6d116329-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.181537 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-684s8\" (UniqueName: \"kubernetes.io/projected/70f011f3-3d76-41fd-bf20-f16a93df32b7-kube-api-access-684s8\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.181619 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zhd2\" (UniqueName: \"kubernetes.io/projected/3054c9c5-945c-43d4-a2c5-adcc6d116329-kube-api-access-7zhd2\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.181679 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70f011f3-3d76-41fd-bf20-f16a93df32b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.333710 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-p2qvq"] Feb 16 13:53:07 crc kubenswrapper[4812]: E0216 13:53:07.335183 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70f011f3-3d76-41fd-bf20-f16a93df32b7" containerName="mariadb-account-create-update" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.335241 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="70f011f3-3d76-41fd-bf20-f16a93df32b7" containerName="mariadb-account-create-update" Feb 16 13:53:07 crc kubenswrapper[4812]: E0216 13:53:07.335280 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3054c9c5-945c-43d4-a2c5-adcc6d116329" containerName="mariadb-account-create-update" Feb 16 13:53:07 crc 
kubenswrapper[4812]: I0216 13:53:07.335291 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3054c9c5-945c-43d4-a2c5-adcc6d116329" containerName="mariadb-account-create-update" Feb 16 13:53:07 crc kubenswrapper[4812]: E0216 13:53:07.335309 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239d953a-0da6-460c-8dce-99ff36a1015b" containerName="mariadb-database-create" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.335317 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="239d953a-0da6-460c-8dce-99ff36a1015b" containerName="mariadb-database-create" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.335702 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="3054c9c5-945c-43d4-a2c5-adcc6d116329" containerName="mariadb-account-create-update" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.335735 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="70f011f3-3d76-41fd-bf20-f16a93df32b7" containerName="mariadb-account-create-update" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.335765 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="239d953a-0da6-460c-8dce-99ff36a1015b" containerName="mariadb-database-create" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.337904 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p2qvq" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.351605 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3a73-account-create-update-tljwz"] Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.354810 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3a73-account-create-update-tljwz" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.358369 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.365348 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-p2qvq"] Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.377151 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3a73-account-create-update-tljwz"] Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.519305 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8t48r"] Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.521566 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8t48r" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.532658 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-112b-account-create-update-vwnq8"] Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.535249 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-112b-account-create-update-vwnq8" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.542763 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.549137 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-474b-account-create-update-gsjf7" event={"ID":"3054c9c5-945c-43d4-a2c5-adcc6d116329","Type":"ContainerDied","Data":"1546df5434fd632a2fdded3884e51c3e7a6cadb4846e4d9a821387563e406554"} Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.549399 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1546df5434fd632a2fdded3884e51c3e7a6cadb4846e4d9a821387563e406554" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.549276 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-474b-account-create-update-gsjf7" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.555053 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8t48r"] Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.555463 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb-operator-scripts\") pod \"glance-3a73-account-create-update-tljwz\" (UID: \"0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb\") " pod="openstack/glance-3a73-account-create-update-tljwz" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.555543 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmp7m\" (UniqueName: \"kubernetes.io/projected/0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb-kube-api-access-nmp7m\") pod \"glance-3a73-account-create-update-tljwz\" (UID: \"0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb\") " 
pod="openstack/glance-3a73-account-create-update-tljwz" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.555651 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4swc\" (UniqueName: \"kubernetes.io/projected/9635671b-a1ee-4374-8487-c492616a699b-kube-api-access-v4swc\") pod \"glance-db-create-p2qvq\" (UID: \"9635671b-a1ee-4374-8487-c492616a699b\") " pod="openstack/glance-db-create-p2qvq" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.555748 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9635671b-a1ee-4374-8487-c492616a699b-operator-scripts\") pod \"glance-db-create-p2qvq\" (UID: \"9635671b-a1ee-4374-8487-c492616a699b\") " pod="openstack/glance-db-create-p2qvq" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.564423 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8fwqw" event={"ID":"70f011f3-3d76-41fd-bf20-f16a93df32b7","Type":"ContainerDied","Data":"2acc4c8fd13f92031adc746d04bf3e445228ed5cde31939f2876f69adf58a952"} Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.564573 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2acc4c8fd13f92031adc746d04bf3e445228ed5cde31939f2876f69adf58a952" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.564780 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8fwqw" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.571833 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-112b-account-create-update-vwnq8"] Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.659156 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb-operator-scripts\") pod \"glance-3a73-account-create-update-tljwz\" (UID: \"0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb\") " pod="openstack/glance-3a73-account-create-update-tljwz" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.659252 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmp7m\" (UniqueName: \"kubernetes.io/projected/0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb-kube-api-access-nmp7m\") pod \"glance-3a73-account-create-update-tljwz\" (UID: \"0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb\") " pod="openstack/glance-3a73-account-create-update-tljwz" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.659335 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4swc\" (UniqueName: \"kubernetes.io/projected/9635671b-a1ee-4374-8487-c492616a699b-kube-api-access-v4swc\") pod \"glance-db-create-p2qvq\" (UID: \"9635671b-a1ee-4374-8487-c492616a699b\") " pod="openstack/glance-db-create-p2qvq" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.659384 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgjnm\" (UniqueName: \"kubernetes.io/projected/ec08c9a9-68e9-4615-9375-4511a84ea575-kube-api-access-lgjnm\") pod \"keystone-112b-account-create-update-vwnq8\" (UID: \"ec08c9a9-68e9-4615-9375-4511a84ea575\") " pod="openstack/keystone-112b-account-create-update-vwnq8" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.659428 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9635671b-a1ee-4374-8487-c492616a699b-operator-scripts\") pod \"glance-db-create-p2qvq\" (UID: \"9635671b-a1ee-4374-8487-c492616a699b\") " pod="openstack/glance-db-create-p2qvq" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.659497 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509696b4-e17f-4d72-99d2-d2a800398fe6-operator-scripts\") pod \"keystone-db-create-8t48r\" (UID: \"509696b4-e17f-4d72-99d2-d2a800398fe6\") " pod="openstack/keystone-db-create-8t48r" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.659526 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec08c9a9-68e9-4615-9375-4511a84ea575-operator-scripts\") pod \"keystone-112b-account-create-update-vwnq8\" (UID: \"ec08c9a9-68e9-4615-9375-4511a84ea575\") " pod="openstack/keystone-112b-account-create-update-vwnq8" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.659573 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sxsf\" (UniqueName: \"kubernetes.io/projected/509696b4-e17f-4d72-99d2-d2a800398fe6-kube-api-access-9sxsf\") pod \"keystone-db-create-8t48r\" (UID: \"509696b4-e17f-4d72-99d2-d2a800398fe6\") " pod="openstack/keystone-db-create-8t48r" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.660567 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb-operator-scripts\") pod \"glance-3a73-account-create-update-tljwz\" (UID: \"0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb\") " pod="openstack/glance-3a73-account-create-update-tljwz" Feb 16 13:53:07 crc 
kubenswrapper[4812]: I0216 13:53:07.663756 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9635671b-a1ee-4374-8487-c492616a699b-operator-scripts\") pod \"glance-db-create-p2qvq\" (UID: \"9635671b-a1ee-4374-8487-c492616a699b\") " pod="openstack/glance-db-create-p2qvq" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.692692 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmp7m\" (UniqueName: \"kubernetes.io/projected/0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb-kube-api-access-nmp7m\") pod \"glance-3a73-account-create-update-tljwz\" (UID: \"0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb\") " pod="openstack/glance-3a73-account-create-update-tljwz" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.702018 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4swc\" (UniqueName: \"kubernetes.io/projected/9635671b-a1ee-4374-8487-c492616a699b-kube-api-access-v4swc\") pod \"glance-db-create-p2qvq\" (UID: \"9635671b-a1ee-4374-8487-c492616a699b\") " pod="openstack/glance-db-create-p2qvq" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.737910 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3a73-account-create-update-tljwz" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.762118 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgjnm\" (UniqueName: \"kubernetes.io/projected/ec08c9a9-68e9-4615-9375-4511a84ea575-kube-api-access-lgjnm\") pod \"keystone-112b-account-create-update-vwnq8\" (UID: \"ec08c9a9-68e9-4615-9375-4511a84ea575\") " pod="openstack/keystone-112b-account-create-update-vwnq8" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.762263 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509696b4-e17f-4d72-99d2-d2a800398fe6-operator-scripts\") pod \"keystone-db-create-8t48r\" (UID: \"509696b4-e17f-4d72-99d2-d2a800398fe6\") " pod="openstack/keystone-db-create-8t48r" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.762306 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec08c9a9-68e9-4615-9375-4511a84ea575-operator-scripts\") pod \"keystone-112b-account-create-update-vwnq8\" (UID: \"ec08c9a9-68e9-4615-9375-4511a84ea575\") " pod="openstack/keystone-112b-account-create-update-vwnq8" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.762352 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sxsf\" (UniqueName: \"kubernetes.io/projected/509696b4-e17f-4d72-99d2-d2a800398fe6-kube-api-access-9sxsf\") pod \"keystone-db-create-8t48r\" (UID: \"509696b4-e17f-4d72-99d2-d2a800398fe6\") " pod="openstack/keystone-db-create-8t48r" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.763464 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509696b4-e17f-4d72-99d2-d2a800398fe6-operator-scripts\") pod \"keystone-db-create-8t48r\" 
(UID: \"509696b4-e17f-4d72-99d2-d2a800398fe6\") " pod="openstack/keystone-db-create-8t48r" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.764220 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec08c9a9-68e9-4615-9375-4511a84ea575-operator-scripts\") pod \"keystone-112b-account-create-update-vwnq8\" (UID: \"ec08c9a9-68e9-4615-9375-4511a84ea575\") " pod="openstack/keystone-112b-account-create-update-vwnq8" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.787297 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgjnm\" (UniqueName: \"kubernetes.io/projected/ec08c9a9-68e9-4615-9375-4511a84ea575-kube-api-access-lgjnm\") pod \"keystone-112b-account-create-update-vwnq8\" (UID: \"ec08c9a9-68e9-4615-9375-4511a84ea575\") " pod="openstack/keystone-112b-account-create-update-vwnq8" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.789543 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sxsf\" (UniqueName: \"kubernetes.io/projected/509696b4-e17f-4d72-99d2-d2a800398fe6-kube-api-access-9sxsf\") pod \"keystone-db-create-8t48r\" (UID: \"509696b4-e17f-4d72-99d2-d2a800398fe6\") " pod="openstack/keystone-db-create-8t48r" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.854012 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8t48r" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.870492 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-112b-account-create-update-vwnq8" Feb 16 13:53:07 crc kubenswrapper[4812]: I0216 13:53:07.965585 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-p2qvq" Feb 16 13:53:08 crc kubenswrapper[4812]: I0216 13:53:08.126717 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3a73-account-create-update-tljwz"] Feb 16 13:53:08 crc kubenswrapper[4812]: I0216 13:53:08.583031 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3a73-account-create-update-tljwz" event={"ID":"0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb","Type":"ContainerStarted","Data":"5f7dbaf4d4033db918cc642a55971f4088fe1f4d38edb50e035199e3cde72ee1"} Feb 16 13:53:08 crc kubenswrapper[4812]: I0216 13:53:08.721154 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8t48r"] Feb 16 13:53:08 crc kubenswrapper[4812]: W0216 13:53:08.806555 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec08c9a9_68e9_4615_9375_4511a84ea575.slice/crio-6a6d691511b113f7e3f0bb4526e9baccdd57aefc9f4758cd20db5fc1c7d6baef WatchSource:0}: Error finding container 6a6d691511b113f7e3f0bb4526e9baccdd57aefc9f4758cd20db5fc1c7d6baef: Status 404 returned error can't find the container with id 6a6d691511b113f7e3f0bb4526e9baccdd57aefc9f4758cd20db5fc1c7d6baef Feb 16 13:53:08 crc kubenswrapper[4812]: I0216 13:53:08.811118 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-112b-account-create-update-vwnq8"] Feb 16 13:53:08 crc kubenswrapper[4812]: I0216 13:53:08.872037 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-p2qvq"] Feb 16 13:53:08 crc kubenswrapper[4812]: W0216 13:53:08.883872 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9635671b_a1ee_4374_8487_c492616a699b.slice/crio-df13c28449f777ce4f216842124ee64760492fd84f213d7e14754532a1087823 WatchSource:0}: Error finding container 
df13c28449f777ce4f216842124ee64760492fd84f213d7e14754532a1087823: Status 404 returned error can't find the container with id df13c28449f777ce4f216842124ee64760492fd84f213d7e14754532a1087823 Feb 16 13:53:09 crc kubenswrapper[4812]: I0216 13:53:09.609210 4812 generic.go:334] "Generic (PLEG): container finished" podID="9635671b-a1ee-4374-8487-c492616a699b" containerID="7e76518be875978fc1307e56bb7011001a59c0a0a727e4aad11b3713a7b20fc1" exitCode=0 Feb 16 13:53:09 crc kubenswrapper[4812]: I0216 13:53:09.609299 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p2qvq" event={"ID":"9635671b-a1ee-4374-8487-c492616a699b","Type":"ContainerDied","Data":"7e76518be875978fc1307e56bb7011001a59c0a0a727e4aad11b3713a7b20fc1"} Feb 16 13:53:09 crc kubenswrapper[4812]: I0216 13:53:09.609336 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p2qvq" event={"ID":"9635671b-a1ee-4374-8487-c492616a699b","Type":"ContainerStarted","Data":"df13c28449f777ce4f216842124ee64760492fd84f213d7e14754532a1087823"} Feb 16 13:53:09 crc kubenswrapper[4812]: I0216 13:53:09.612940 4812 generic.go:334] "Generic (PLEG): container finished" podID="0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb" containerID="3afe130fe636a1c04a8ed17bf9c1f9e55a35f252a5ca3114e48a9c3a17d779ca" exitCode=0 Feb 16 13:53:09 crc kubenswrapper[4812]: I0216 13:53:09.613013 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3a73-account-create-update-tljwz" event={"ID":"0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb","Type":"ContainerDied","Data":"3afe130fe636a1c04a8ed17bf9c1f9e55a35f252a5ca3114e48a9c3a17d779ca"} Feb 16 13:53:09 crc kubenswrapper[4812]: I0216 13:53:09.615535 4812 generic.go:334] "Generic (PLEG): container finished" podID="ec08c9a9-68e9-4615-9375-4511a84ea575" containerID="d930b22adca630860f55999176ab9026e0f3be180338420116ccf15e1b1ba6af" exitCode=0 Feb 16 13:53:09 crc kubenswrapper[4812]: I0216 13:53:09.615604 4812 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/keystone-112b-account-create-update-vwnq8" event={"ID":"ec08c9a9-68e9-4615-9375-4511a84ea575","Type":"ContainerDied","Data":"d930b22adca630860f55999176ab9026e0f3be180338420116ccf15e1b1ba6af"} Feb 16 13:53:09 crc kubenswrapper[4812]: I0216 13:53:09.615628 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-112b-account-create-update-vwnq8" event={"ID":"ec08c9a9-68e9-4615-9375-4511a84ea575","Type":"ContainerStarted","Data":"6a6d691511b113f7e3f0bb4526e9baccdd57aefc9f4758cd20db5fc1c7d6baef"} Feb 16 13:53:09 crc kubenswrapper[4812]: I0216 13:53:09.619170 4812 generic.go:334] "Generic (PLEG): container finished" podID="509696b4-e17f-4d72-99d2-d2a800398fe6" containerID="a0e0fbf47d3f8d3903a120e88e251d8dc4bb641ae939a839d8b5ad9d120b6042" exitCode=0 Feb 16 13:53:09 crc kubenswrapper[4812]: I0216 13:53:09.619223 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8t48r" event={"ID":"509696b4-e17f-4d72-99d2-d2a800398fe6","Type":"ContainerDied","Data":"a0e0fbf47d3f8d3903a120e88e251d8dc4bb641ae939a839d8b5ad9d120b6042"} Feb 16 13:53:09 crc kubenswrapper[4812]: I0216 13:53:09.619248 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8t48r" event={"ID":"509696b4-e17f-4d72-99d2-d2a800398fe6","Type":"ContainerStarted","Data":"3b15c8fd23d09700a018b7cbfa9bf23aff754ee5f707e4804a75e8eed38dbf49"} Feb 16 13:53:09 crc kubenswrapper[4812]: I0216 13:53:09.903616 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-6xb2f" Feb 16 13:53:10 crc kubenswrapper[4812]: I0216 13:53:10.037388 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8fwqw"] Feb 16 13:53:10 crc kubenswrapper[4812]: I0216 13:53:10.050103 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8fwqw"] Feb 16 13:53:10 crc 
kubenswrapper[4812]: I0216 13:53:10.148748 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-p88ww" Feb 16 13:53:10 crc kubenswrapper[4812]: I0216 13:53:10.226769 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h" Feb 16 13:53:10 crc kubenswrapper[4812]: I0216 13:53:10.598666 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:53:10 crc kubenswrapper[4812]: I0216 13:53:10.634667 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"96cb02af-deed-4da5-96cf-28d69592caed","Type":"ContainerStarted","Data":"2f4ab73788e3dc79a5683200805d54dcd68c21b5b0255c810944e72584805487"} Feb 16 13:53:10 crc kubenswrapper[4812]: I0216 13:53:10.693961 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=32.99960689 podStartE2EDuration="1m1.693922366s" podCreationTimestamp="2026-02-16 13:52:09 +0000 UTC" firstStartedPulling="2026-02-16 13:52:37.400500526 +0000 UTC m=+1246.464831227" lastFinishedPulling="2026-02-16 13:53:06.094816002 +0000 UTC m=+1275.159146703" observedRunningTime="2026-02-16 13:53:10.678490971 +0000 UTC m=+1279.742821682" watchObservedRunningTime="2026-02-16 13:53:10.693922366 +0000 UTC m=+1279.758253067" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.003578 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="51f12264-af08-4cf2-9e76-98dc91b0b7a8" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.144186 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.214595 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3a73-account-create-update-tljwz" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.277462 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.322304 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb-operator-scripts\") pod \"0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb\" (UID: \"0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb\") " Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.322419 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmp7m\" (UniqueName: \"kubernetes.io/projected/0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb-kube-api-access-nmp7m\") pod \"0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb\" (UID: \"0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb\") " Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.325833 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb" (UID: "0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.341518 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb-kube-api-access-nmp7m" (OuterVolumeSpecName: "kube-api-access-nmp7m") pod "0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb" (UID: "0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb"). 
InnerVolumeSpecName "kube-api-access-nmp7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.426792 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmp7m\" (UniqueName: \"kubernetes.io/projected/0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb-kube-api-access-nmp7m\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.426853 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.485242 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p2qvq" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.496995 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-112b-account-create-update-vwnq8" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.506299 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8t48r" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.528608 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4swc\" (UniqueName: \"kubernetes.io/projected/9635671b-a1ee-4374-8487-c492616a699b-kube-api-access-v4swc\") pod \"9635671b-a1ee-4374-8487-c492616a699b\" (UID: \"9635671b-a1ee-4374-8487-c492616a699b\") " Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.528706 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgjnm\" (UniqueName: \"kubernetes.io/projected/ec08c9a9-68e9-4615-9375-4511a84ea575-kube-api-access-lgjnm\") pod \"ec08c9a9-68e9-4615-9375-4511a84ea575\" (UID: \"ec08c9a9-68e9-4615-9375-4511a84ea575\") " Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.528759 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509696b4-e17f-4d72-99d2-d2a800398fe6-operator-scripts\") pod \"509696b4-e17f-4d72-99d2-d2a800398fe6\" (UID: \"509696b4-e17f-4d72-99d2-d2a800398fe6\") " Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.528855 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sxsf\" (UniqueName: \"kubernetes.io/projected/509696b4-e17f-4d72-99d2-d2a800398fe6-kube-api-access-9sxsf\") pod \"509696b4-e17f-4d72-99d2-d2a800398fe6\" (UID: \"509696b4-e17f-4d72-99d2-d2a800398fe6\") " Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.529071 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9635671b-a1ee-4374-8487-c492616a699b-operator-scripts\") pod \"9635671b-a1ee-4374-8487-c492616a699b\" (UID: \"9635671b-a1ee-4374-8487-c492616a699b\") " Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.529159 4812 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec08c9a9-68e9-4615-9375-4511a84ea575-operator-scripts\") pod \"ec08c9a9-68e9-4615-9375-4511a84ea575\" (UID: \"ec08c9a9-68e9-4615-9375-4511a84ea575\") " Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.530638 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec08c9a9-68e9-4615-9375-4511a84ea575-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec08c9a9-68e9-4615-9375-4511a84ea575" (UID: "ec08c9a9-68e9-4615-9375-4511a84ea575"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.532016 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/509696b4-e17f-4d72-99d2-d2a800398fe6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "509696b4-e17f-4d72-99d2-d2a800398fe6" (UID: "509696b4-e17f-4d72-99d2-d2a800398fe6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.536963 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9635671b-a1ee-4374-8487-c492616a699b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9635671b-a1ee-4374-8487-c492616a699b" (UID: "9635671b-a1ee-4374-8487-c492616a699b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.539105 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec08c9a9-68e9-4615-9375-4511a84ea575-kube-api-access-lgjnm" (OuterVolumeSpecName: "kube-api-access-lgjnm") pod "ec08c9a9-68e9-4615-9375-4511a84ea575" (UID: "ec08c9a9-68e9-4615-9375-4511a84ea575"). 
InnerVolumeSpecName "kube-api-access-lgjnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.539322 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9635671b-a1ee-4374-8487-c492616a699b-kube-api-access-v4swc" (OuterVolumeSpecName: "kube-api-access-v4swc") pod "9635671b-a1ee-4374-8487-c492616a699b" (UID: "9635671b-a1ee-4374-8487-c492616a699b"). InnerVolumeSpecName "kube-api-access-v4swc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.544806 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/509696b4-e17f-4d72-99d2-d2a800398fe6-kube-api-access-9sxsf" (OuterVolumeSpecName: "kube-api-access-9sxsf") pod "509696b4-e17f-4d72-99d2-d2a800398fe6" (UID: "509696b4-e17f-4d72-99d2-d2a800398fe6"). InnerVolumeSpecName "kube-api-access-9sxsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.632715 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sxsf\" (UniqueName: \"kubernetes.io/projected/509696b4-e17f-4d72-99d2-d2a800398fe6-kube-api-access-9sxsf\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.632802 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9635671b-a1ee-4374-8487-c492616a699b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.632817 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec08c9a9-68e9-4615-9375-4511a84ea575-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.632830 4812 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-v4swc\" (UniqueName: \"kubernetes.io/projected/9635671b-a1ee-4374-8487-c492616a699b-kube-api-access-v4swc\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.632844 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgjnm\" (UniqueName: \"kubernetes.io/projected/ec08c9a9-68e9-4615-9375-4511a84ea575-kube-api-access-lgjnm\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.632854 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509696b4-e17f-4d72-99d2-d2a800398fe6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.653003 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8t48r" event={"ID":"509696b4-e17f-4d72-99d2-d2a800398fe6","Type":"ContainerDied","Data":"3b15c8fd23d09700a018b7cbfa9bf23aff754ee5f707e4804a75e8eed38dbf49"} Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.653063 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b15c8fd23d09700a018b7cbfa9bf23aff754ee5f707e4804a75e8eed38dbf49" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.653143 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8t48r" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.657743 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-p2qvq" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.657761 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p2qvq" event={"ID":"9635671b-a1ee-4374-8487-c492616a699b","Type":"ContainerDied","Data":"df13c28449f777ce4f216842124ee64760492fd84f213d7e14754532a1087823"} Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.657828 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df13c28449f777ce4f216842124ee64760492fd84f213d7e14754532a1087823" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.664359 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3a73-account-create-update-tljwz" event={"ID":"0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb","Type":"ContainerDied","Data":"5f7dbaf4d4033db918cc642a55971f4088fe1f4d38edb50e035199e3cde72ee1"} Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.664428 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f7dbaf4d4033db918cc642a55971f4088fe1f4d38edb50e035199e3cde72ee1" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.664535 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3a73-account-create-update-tljwz" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.669499 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-112b-account-create-update-vwnq8" event={"ID":"ec08c9a9-68e9-4615-9375-4511a84ea575","Type":"ContainerDied","Data":"6a6d691511b113f7e3f0bb4526e9baccdd57aefc9f4758cd20db5fc1c7d6baef"} Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.669567 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6d691511b113f7e3f0bb4526e9baccdd57aefc9f4758cd20db5fc1c7d6baef" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.669813 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-112b-account-create-update-vwnq8" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.670172 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.677256 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 16 13:53:11 crc kubenswrapper[4812]: I0216 13:53:11.897304 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70f011f3-3d76-41fd-bf20-f16a93df32b7" path="/var/lib/kubelet/pods/70f011f3-3d76-41fd-bf20-f16a93df32b7/volumes" Feb 16 13:53:13 crc kubenswrapper[4812]: I0216 13:53:13.552654 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:53:13 crc kubenswrapper[4812]: I0216 13:53:13.619135 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-gk6gl"] Feb 16 13:53:13 crc kubenswrapper[4812]: I0216 13:53:13.619564 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" podUID="1630a179-1d00-4556-a867-72b31cd916fe" containerName="dnsmasq-dns" containerID="cri-o://5fa49f3d2408c9afc0b4541f429eec56463105e2959addded664399aac98a32a" gracePeriod=10 Feb 16 13:53:13 crc kubenswrapper[4812]: I0216 13:53:13.689634 4812 generic.go:334] "Generic (PLEG): container finished" podID="3e7d63b8-7d3a-4169-b939-2ea11895b53a" containerID="f5c4b9f533db50e25dcb57e3c193cbc1272645f8cc4f0269a5c0bdcf9345e8e0" exitCode=0 Feb 16 13:53:13 crc kubenswrapper[4812]: I0216 13:53:13.689680 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dkwvj" event={"ID":"3e7d63b8-7d3a-4169-b939-2ea11895b53a","Type":"ContainerDied","Data":"f5c4b9f533db50e25dcb57e3c193cbc1272645f8cc4f0269a5c0bdcf9345e8e0"} Feb 16 13:53:14 crc 
kubenswrapper[4812]: I0216 13:53:14.210186 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.305582 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-ovsdbserver-sb\") pod \"1630a179-1d00-4556-a867-72b31cd916fe\" (UID: \"1630a179-1d00-4556-a867-72b31cd916fe\") " Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.305782 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7nq9\" (UniqueName: \"kubernetes.io/projected/1630a179-1d00-4556-a867-72b31cd916fe-kube-api-access-j7nq9\") pod \"1630a179-1d00-4556-a867-72b31cd916fe\" (UID: \"1630a179-1d00-4556-a867-72b31cd916fe\") " Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.305870 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-dns-svc\") pod \"1630a179-1d00-4556-a867-72b31cd916fe\" (UID: \"1630a179-1d00-4556-a867-72b31cd916fe\") " Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.305980 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-config\") pod \"1630a179-1d00-4556-a867-72b31cd916fe\" (UID: \"1630a179-1d00-4556-a867-72b31cd916fe\") " Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.328330 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1630a179-1d00-4556-a867-72b31cd916fe-kube-api-access-j7nq9" (OuterVolumeSpecName: "kube-api-access-j7nq9") pod "1630a179-1d00-4556-a867-72b31cd916fe" (UID: "1630a179-1d00-4556-a867-72b31cd916fe"). InnerVolumeSpecName "kube-api-access-j7nq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.369233 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1630a179-1d00-4556-a867-72b31cd916fe" (UID: "1630a179-1d00-4556-a867-72b31cd916fe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.369257 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1630a179-1d00-4556-a867-72b31cd916fe" (UID: "1630a179-1d00-4556-a867-72b31cd916fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.374060 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-config" (OuterVolumeSpecName: "config") pod "1630a179-1d00-4556-a867-72b31cd916fe" (UID: "1630a179-1d00-4556-a867-72b31cd916fe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.407953 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7nq9\" (UniqueName: \"kubernetes.io/projected/1630a179-1d00-4556-a867-72b31cd916fe-kube-api-access-j7nq9\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.407994 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.408010 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.408020 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1630a179-1d00-4556-a867-72b31cd916fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.704847 4812 generic.go:334] "Generic (PLEG): container finished" podID="1630a179-1d00-4556-a867-72b31cd916fe" containerID="5fa49f3d2408c9afc0b4541f429eec56463105e2959addded664399aac98a32a" exitCode=0 Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.704927 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.704933 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" event={"ID":"1630a179-1d00-4556-a867-72b31cd916fe","Type":"ContainerDied","Data":"5fa49f3d2408c9afc0b4541f429eec56463105e2959addded664399aac98a32a"} Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.705006 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-gk6gl" event={"ID":"1630a179-1d00-4556-a867-72b31cd916fe","Type":"ContainerDied","Data":"30cb0a67ec125527c63e9ef1dde4956eb1a9d93651d1ebc2a1510ddf9302e07e"} Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.705050 4812 scope.go:117] "RemoveContainer" containerID="5fa49f3d2408c9afc0b4541f429eec56463105e2959addded664399aac98a32a" Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.777718 4812 scope.go:117] "RemoveContainer" containerID="94d9485495eb074fb35c657e5e83e08983103e37ea6ad8b30be801e90be5dbd4" Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.851876 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-gk6gl"] Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.858720 4812 scope.go:117] "RemoveContainer" containerID="5fa49f3d2408c9afc0b4541f429eec56463105e2959addded664399aac98a32a" Feb 16 13:53:14 crc kubenswrapper[4812]: E0216 13:53:14.869791 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fa49f3d2408c9afc0b4541f429eec56463105e2959addded664399aac98a32a\": container with ID starting with 5fa49f3d2408c9afc0b4541f429eec56463105e2959addded664399aac98a32a not found: ID does not exist" containerID="5fa49f3d2408c9afc0b4541f429eec56463105e2959addded664399aac98a32a" Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.869874 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5fa49f3d2408c9afc0b4541f429eec56463105e2959addded664399aac98a32a"} err="failed to get container status \"5fa49f3d2408c9afc0b4541f429eec56463105e2959addded664399aac98a32a\": rpc error: code = NotFound desc = could not find container \"5fa49f3d2408c9afc0b4541f429eec56463105e2959addded664399aac98a32a\": container with ID starting with 5fa49f3d2408c9afc0b4541f429eec56463105e2959addded664399aac98a32a not found: ID does not exist" Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.869916 4812 scope.go:117] "RemoveContainer" containerID="94d9485495eb074fb35c657e5e83e08983103e37ea6ad8b30be801e90be5dbd4" Feb 16 13:53:14 crc kubenswrapper[4812]: E0216 13:53:14.872774 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94d9485495eb074fb35c657e5e83e08983103e37ea6ad8b30be801e90be5dbd4\": container with ID starting with 94d9485495eb074fb35c657e5e83e08983103e37ea6ad8b30be801e90be5dbd4 not found: ID does not exist" containerID="94d9485495eb074fb35c657e5e83e08983103e37ea6ad8b30be801e90be5dbd4" Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.872836 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94d9485495eb074fb35c657e5e83e08983103e37ea6ad8b30be801e90be5dbd4"} err="failed to get container status \"94d9485495eb074fb35c657e5e83e08983103e37ea6ad8b30be801e90be5dbd4\": rpc error: code = NotFound desc = could not find container \"94d9485495eb074fb35c657e5e83e08983103e37ea6ad8b30be801e90be5dbd4\": container with ID starting with 94d9485495eb074fb35c657e5e83e08983103e37ea6ad8b30be801e90be5dbd4 not found: ID does not exist" Feb 16 13:53:14 crc kubenswrapper[4812]: I0216 13:53:14.877131 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-gk6gl"] Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.028855 4812 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/root-account-create-update-bqvq2"] Feb 16 13:53:15 crc kubenswrapper[4812]: E0216 13:53:15.033733 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9635671b-a1ee-4374-8487-c492616a699b" containerName="mariadb-database-create" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.033780 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9635671b-a1ee-4374-8487-c492616a699b" containerName="mariadb-database-create" Feb 16 13:53:15 crc kubenswrapper[4812]: E0216 13:53:15.033821 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb" containerName="mariadb-account-create-update" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.033839 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb" containerName="mariadb-account-create-update" Feb 16 13:53:15 crc kubenswrapper[4812]: E0216 13:53:15.033854 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="509696b4-e17f-4d72-99d2-d2a800398fe6" containerName="mariadb-database-create" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.033862 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="509696b4-e17f-4d72-99d2-d2a800398fe6" containerName="mariadb-database-create" Feb 16 13:53:15 crc kubenswrapper[4812]: E0216 13:53:15.033884 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec08c9a9-68e9-4615-9375-4511a84ea575" containerName="mariadb-account-create-update" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.033891 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec08c9a9-68e9-4615-9375-4511a84ea575" containerName="mariadb-account-create-update" Feb 16 13:53:15 crc kubenswrapper[4812]: E0216 13:53:15.033931 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1630a179-1d00-4556-a867-72b31cd916fe" containerName="init" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.033939 4812 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1630a179-1d00-4556-a867-72b31cd916fe" containerName="init" Feb 16 13:53:15 crc kubenswrapper[4812]: E0216 13:53:15.033954 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1630a179-1d00-4556-a867-72b31cd916fe" containerName="dnsmasq-dns" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.033962 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1630a179-1d00-4556-a867-72b31cd916fe" containerName="dnsmasq-dns" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.034393 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="509696b4-e17f-4d72-99d2-d2a800398fe6" containerName="mariadb-database-create" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.034427 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec08c9a9-68e9-4615-9375-4511a84ea575" containerName="mariadb-account-create-update" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.034439 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9635671b-a1ee-4374-8487-c492616a699b" containerName="mariadb-database-create" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.034476 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb" containerName="mariadb-account-create-update" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.034489 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1630a179-1d00-4556-a867-72b31cd916fe" containerName="dnsmasq-dns" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.035717 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bqvq2" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.040155 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.045781 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bqvq2"] Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.123773 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7tnv\" (UniqueName: \"kubernetes.io/projected/f5586976-e0b2-4971-9202-1804e20d413f-kube-api-access-q7tnv\") pod \"root-account-create-update-bqvq2\" (UID: \"f5586976-e0b2-4971-9202-1804e20d413f\") " pod="openstack/root-account-create-update-bqvq2" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.124466 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5586976-e0b2-4971-9202-1804e20d413f-operator-scripts\") pod \"root-account-create-update-bqvq2\" (UID: \"f5586976-e0b2-4971-9202-1804e20d413f\") " pod="openstack/root-account-create-update-bqvq2" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.226937 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5586976-e0b2-4971-9202-1804e20d413f-operator-scripts\") pod \"root-account-create-update-bqvq2\" (UID: \"f5586976-e0b2-4971-9202-1804e20d413f\") " pod="openstack/root-account-create-update-bqvq2" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.227035 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7tnv\" (UniqueName: \"kubernetes.io/projected/f5586976-e0b2-4971-9202-1804e20d413f-kube-api-access-q7tnv\") pod \"root-account-create-update-bqvq2\" (UID: 
\"f5586976-e0b2-4971-9202-1804e20d413f\") " pod="openstack/root-account-create-update-bqvq2" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.228321 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5586976-e0b2-4971-9202-1804e20d413f-operator-scripts\") pod \"root-account-create-update-bqvq2\" (UID: \"f5586976-e0b2-4971-9202-1804e20d413f\") " pod="openstack/root-account-create-update-bqvq2" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.248784 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7tnv\" (UniqueName: \"kubernetes.io/projected/f5586976-e0b2-4971-9202-1804e20d413f-kube-api-access-q7tnv\") pod \"root-account-create-update-bqvq2\" (UID: \"f5586976-e0b2-4971-9202-1804e20d413f\") " pod="openstack/root-account-create-update-bqvq2" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.307927 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.363892 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bqvq2" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.435579 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9knxd\" (UniqueName: \"kubernetes.io/projected/3e7d63b8-7d3a-4169-b939-2ea11895b53a-kube-api-access-9knxd\") pod \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.435681 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e7d63b8-7d3a-4169-b939-2ea11895b53a-scripts\") pod \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.435856 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3e7d63b8-7d3a-4169-b939-2ea11895b53a-etc-swift\") pod \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.435986 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3e7d63b8-7d3a-4169-b939-2ea11895b53a-ring-data-devices\") pod \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.436038 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-dispersionconf\") pod \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.436139 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" 
(UniqueName: \"kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-swiftconf\") pod \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.436221 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-combined-ca-bundle\") pod \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\" (UID: \"3e7d63b8-7d3a-4169-b939-2ea11895b53a\") " Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.438764 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e7d63b8-7d3a-4169-b939-2ea11895b53a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3e7d63b8-7d3a-4169-b939-2ea11895b53a" (UID: "3e7d63b8-7d3a-4169-b939-2ea11895b53a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.439314 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e7d63b8-7d3a-4169-b939-2ea11895b53a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3e7d63b8-7d3a-4169-b939-2ea11895b53a" (UID: "3e7d63b8-7d3a-4169-b939-2ea11895b53a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.443609 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e7d63b8-7d3a-4169-b939-2ea11895b53a-kube-api-access-9knxd" (OuterVolumeSpecName: "kube-api-access-9knxd") pod "3e7d63b8-7d3a-4169-b939-2ea11895b53a" (UID: "3e7d63b8-7d3a-4169-b939-2ea11895b53a"). InnerVolumeSpecName "kube-api-access-9knxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.457969 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3e7d63b8-7d3a-4169-b939-2ea11895b53a" (UID: "3e7d63b8-7d3a-4169-b939-2ea11895b53a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.465308 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3e7d63b8-7d3a-4169-b939-2ea11895b53a" (UID: "3e7d63b8-7d3a-4169-b939-2ea11895b53a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.467407 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e7d63b8-7d3a-4169-b939-2ea11895b53a-scripts" (OuterVolumeSpecName: "scripts") pod "3e7d63b8-7d3a-4169-b939-2ea11895b53a" (UID: "3e7d63b8-7d3a-4169-b939-2ea11895b53a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.471207 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e7d63b8-7d3a-4169-b939-2ea11895b53a" (UID: "3e7d63b8-7d3a-4169-b939-2ea11895b53a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.539226 4812 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3e7d63b8-7d3a-4169-b939-2ea11895b53a-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.539277 4812 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3e7d63b8-7d3a-4169-b939-2ea11895b53a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.539292 4812 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.539302 4812 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.539310 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7d63b8-7d3a-4169-b939-2ea11895b53a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.539320 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9knxd\" (UniqueName: \"kubernetes.io/projected/3e7d63b8-7d3a-4169-b939-2ea11895b53a-kube-api-access-9knxd\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.539332 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e7d63b8-7d3a-4169-b939-2ea11895b53a-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.719638 4812 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dkwvj" event={"ID":"3e7d63b8-7d3a-4169-b939-2ea11895b53a","Type":"ContainerDied","Data":"86fe0ab49e4c3456dbdc41888418520e84729d09ab738018ff4fd202c19acbc5"} Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.719706 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86fe0ab49e4c3456dbdc41888418520e84729d09ab738018ff4fd202c19acbc5" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.719709 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dkwvj" Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.867782 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bqvq2"] Feb 16 13:53:15 crc kubenswrapper[4812]: W0216 13:53:15.875140 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5586976_e0b2_4971_9202_1804e20d413f.slice/crio-f0dbcca5b945f5cd4837227d7a797ec267a8fc9bd5964457d94bd71b0e300a0e WatchSource:0}: Error finding container f0dbcca5b945f5cd4837227d7a797ec267a8fc9bd5964457d94bd71b0e300a0e: Status 404 returned error can't find the container with id f0dbcca5b945f5cd4837227d7a797ec267a8fc9bd5964457d94bd71b0e300a0e Feb 16 13:53:15 crc kubenswrapper[4812]: I0216 13:53:15.890991 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1630a179-1d00-4556-a867-72b31cd916fe" path="/var/lib/kubelet/pods/1630a179-1d00-4556-a867-72b31cd916fe/volumes" Feb 16 13:53:16 crc kubenswrapper[4812]: I0216 13:53:16.725040 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 16 13:53:16 crc kubenswrapper[4812]: I0216 13:53:16.737742 4812 generic.go:334] "Generic (PLEG): container finished" podID="f5586976-e0b2-4971-9202-1804e20d413f" 
containerID="bcdcba1c809c0bad5327178869ff05d9591b5020d8636ee0f44c09e18a3e9d03" exitCode=0 Feb 16 13:53:16 crc kubenswrapper[4812]: I0216 13:53:16.737786 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bqvq2" event={"ID":"f5586976-e0b2-4971-9202-1804e20d413f","Type":"ContainerDied","Data":"bcdcba1c809c0bad5327178869ff05d9591b5020d8636ee0f44c09e18a3e9d03"} Feb 16 13:53:16 crc kubenswrapper[4812]: I0216 13:53:16.737811 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bqvq2" event={"ID":"f5586976-e0b2-4971-9202-1804e20d413f","Type":"ContainerStarted","Data":"f0dbcca5b945f5cd4837227d7a797ec267a8fc9bd5964457d94bd71b0e300a0e"} Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.583953 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-mwzf9"] Feb 16 13:53:17 crc kubenswrapper[4812]: E0216 13:53:17.585309 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e7d63b8-7d3a-4169-b939-2ea11895b53a" containerName="swift-ring-rebalance" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.585345 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e7d63b8-7d3a-4169-b939-2ea11895b53a" containerName="swift-ring-rebalance" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.585638 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e7d63b8-7d3a-4169-b939-2ea11895b53a" containerName="swift-ring-rebalance" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.586875 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mwzf9" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.589551 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.590491 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-v9qd8" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.601815 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mwzf9"] Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.688075 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-db-sync-config-data\") pod \"glance-db-sync-mwzf9\" (UID: \"03e0b815-7641-435c-9934-05f5c5307962\") " pod="openstack/glance-db-sync-mwzf9" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.688158 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-combined-ca-bundle\") pod \"glance-db-sync-mwzf9\" (UID: \"03e0b815-7641-435c-9934-05f5c5307962\") " pod="openstack/glance-db-sync-mwzf9" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.688258 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-config-data\") pod \"glance-db-sync-mwzf9\" (UID: \"03e0b815-7641-435c-9934-05f5c5307962\") " pod="openstack/glance-db-sync-mwzf9" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.688351 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcc8t\" (UniqueName: 
\"kubernetes.io/projected/03e0b815-7641-435c-9934-05f5c5307962-kube-api-access-hcc8t\") pod \"glance-db-sync-mwzf9\" (UID: \"03e0b815-7641-435c-9934-05f5c5307962\") " pod="openstack/glance-db-sync-mwzf9" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.791161 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-db-sync-config-data\") pod \"glance-db-sync-mwzf9\" (UID: \"03e0b815-7641-435c-9934-05f5c5307962\") " pod="openstack/glance-db-sync-mwzf9" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.791813 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-combined-ca-bundle\") pod \"glance-db-sync-mwzf9\" (UID: \"03e0b815-7641-435c-9934-05f5c5307962\") " pod="openstack/glance-db-sync-mwzf9" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.791912 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-config-data\") pod \"glance-db-sync-mwzf9\" (UID: \"03e0b815-7641-435c-9934-05f5c5307962\") " pod="openstack/glance-db-sync-mwzf9" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.791970 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcc8t\" (UniqueName: \"kubernetes.io/projected/03e0b815-7641-435c-9934-05f5c5307962-kube-api-access-hcc8t\") pod \"glance-db-sync-mwzf9\" (UID: \"03e0b815-7641-435c-9934-05f5c5307962\") " pod="openstack/glance-db-sync-mwzf9" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.800673 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-config-data\") pod \"glance-db-sync-mwzf9\" (UID: 
\"03e0b815-7641-435c-9934-05f5c5307962\") " pod="openstack/glance-db-sync-mwzf9" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.800736 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-combined-ca-bundle\") pod \"glance-db-sync-mwzf9\" (UID: \"03e0b815-7641-435c-9934-05f5c5307962\") " pod="openstack/glance-db-sync-mwzf9" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.810802 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-db-sync-config-data\") pod \"glance-db-sync-mwzf9\" (UID: \"03e0b815-7641-435c-9934-05f5c5307962\") " pod="openstack/glance-db-sync-mwzf9" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.811744 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcc8t\" (UniqueName: \"kubernetes.io/projected/03e0b815-7641-435c-9934-05f5c5307962-kube-api-access-hcc8t\") pod \"glance-db-sync-mwzf9\" (UID: \"03e0b815-7641-435c-9934-05f5c5307962\") " pod="openstack/glance-db-sync-mwzf9" Feb 16 13:53:17 crc kubenswrapper[4812]: I0216 13:53:17.917736 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mwzf9" Feb 16 13:53:18 crc kubenswrapper[4812]: I0216 13:53:18.239320 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bqvq2" Feb 16 13:53:18 crc kubenswrapper[4812]: I0216 13:53:18.406988 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5586976-e0b2-4971-9202-1804e20d413f-operator-scripts\") pod \"f5586976-e0b2-4971-9202-1804e20d413f\" (UID: \"f5586976-e0b2-4971-9202-1804e20d413f\") " Feb 16 13:53:18 crc kubenswrapper[4812]: I0216 13:53:18.407069 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7tnv\" (UniqueName: \"kubernetes.io/projected/f5586976-e0b2-4971-9202-1804e20d413f-kube-api-access-q7tnv\") pod \"f5586976-e0b2-4971-9202-1804e20d413f\" (UID: \"f5586976-e0b2-4971-9202-1804e20d413f\") " Feb 16 13:53:18 crc kubenswrapper[4812]: I0216 13:53:18.407834 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5586976-e0b2-4971-9202-1804e20d413f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5586976-e0b2-4971-9202-1804e20d413f" (UID: "f5586976-e0b2-4971-9202-1804e20d413f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:18 crc kubenswrapper[4812]: I0216 13:53:18.415743 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5586976-e0b2-4971-9202-1804e20d413f-kube-api-access-q7tnv" (OuterVolumeSpecName: "kube-api-access-q7tnv") pod "f5586976-e0b2-4971-9202-1804e20d413f" (UID: "f5586976-e0b2-4971-9202-1804e20d413f"). InnerVolumeSpecName "kube-api-access-q7tnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:18 crc kubenswrapper[4812]: I0216 13:53:18.510584 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5586976-e0b2-4971-9202-1804e20d413f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:18 crc kubenswrapper[4812]: I0216 13:53:18.510682 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7tnv\" (UniqueName: \"kubernetes.io/projected/f5586976-e0b2-4971-9202-1804e20d413f-kube-api-access-q7tnv\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:18 crc kubenswrapper[4812]: I0216 13:53:18.604904 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mwzf9"] Feb 16 13:53:18 crc kubenswrapper[4812]: W0216 13:53:18.611698 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03e0b815_7641_435c_9934_05f5c5307962.slice/crio-3e0ff0760e8638fa5a6197fd3faaa765372a565ab5a53b4cd2e9dc3296c7d27b WatchSource:0}: Error finding container 3e0ff0760e8638fa5a6197fd3faaa765372a565ab5a53b4cd2e9dc3296c7d27b: Status 404 returned error can't find the container with id 3e0ff0760e8638fa5a6197fd3faaa765372a565ab5a53b4cd2e9dc3296c7d27b Feb 16 13:53:18 crc kubenswrapper[4812]: I0216 13:53:18.761113 4812 generic.go:334] "Generic (PLEG): container finished" podID="f00dce1e-5743-4129-b78b-4a29351da7ed" containerID="2f89168b209403e8c4aac4914ae81b28d57a9c2bf93aa494cb245990b3af7ba1" exitCode=0 Feb 16 13:53:18 crc kubenswrapper[4812]: I0216 13:53:18.761218 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f00dce1e-5743-4129-b78b-4a29351da7ed","Type":"ContainerDied","Data":"2f89168b209403e8c4aac4914ae81b28d57a9c2bf93aa494cb245990b3af7ba1"} Feb 16 13:53:18 crc kubenswrapper[4812]: I0216 13:53:18.763968 4812 generic.go:334] "Generic (PLEG): container finished" 
podID="aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1" containerID="824ff668f9c6354c85701d1e4c195c81446c13d5dd3370e5e033d693821ab5d1" exitCode=0 Feb 16 13:53:18 crc kubenswrapper[4812]: I0216 13:53:18.764084 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1","Type":"ContainerDied","Data":"824ff668f9c6354c85701d1e4c195c81446c13d5dd3370e5e033d693821ab5d1"} Feb 16 13:53:18 crc kubenswrapper[4812]: I0216 13:53:18.766020 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mwzf9" event={"ID":"03e0b815-7641-435c-9934-05f5c5307962","Type":"ContainerStarted","Data":"3e0ff0760e8638fa5a6197fd3faaa765372a565ab5a53b4cd2e9dc3296c7d27b"} Feb 16 13:53:18 crc kubenswrapper[4812]: I0216 13:53:18.778830 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bqvq2" event={"ID":"f5586976-e0b2-4971-9202-1804e20d413f","Type":"ContainerDied","Data":"f0dbcca5b945f5cd4837227d7a797ec267a8fc9bd5964457d94bd71b0e300a0e"} Feb 16 13:53:18 crc kubenswrapper[4812]: I0216 13:53:18.778915 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0dbcca5b945f5cd4837227d7a797ec267a8fc9bd5964457d94bd71b0e300a0e" Feb 16 13:53:18 crc kubenswrapper[4812]: I0216 13:53:18.778993 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bqvq2" Feb 16 13:53:19 crc kubenswrapper[4812]: I0216 13:53:19.808264 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f00dce1e-5743-4129-b78b-4a29351da7ed","Type":"ContainerStarted","Data":"d2a32bd5ab2fd55796d1f555ab03019bc8240ae6a6c931a57b64c683d0ec08e5"} Feb 16 13:53:19 crc kubenswrapper[4812]: I0216 13:53:19.809405 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:53:19 crc kubenswrapper[4812]: I0216 13:53:19.815799 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1","Type":"ContainerStarted","Data":"a1b9ba6c183e1dfbb476477890548b278bd32dffb554df570298eb0886094453"} Feb 16 13:53:19 crc kubenswrapper[4812]: I0216 13:53:19.816112 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 13:53:19 crc kubenswrapper[4812]: I0216 13:53:19.853934 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=46.99649737 podStartE2EDuration="1m17.853914361s" podCreationTimestamp="2026-02-16 13:52:02 +0000 UTC" firstStartedPulling="2026-02-16 13:52:04.774672207 +0000 UTC m=+1213.839002908" lastFinishedPulling="2026-02-16 13:52:35.632089198 +0000 UTC m=+1244.696419899" observedRunningTime="2026-02-16 13:53:19.850796022 +0000 UTC m=+1288.915126733" watchObservedRunningTime="2026-02-16 13:53:19.853914361 +0000 UTC m=+1288.918245062" Feb 16 13:53:19 crc kubenswrapper[4812]: I0216 13:53:19.893721 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371958.961077 podStartE2EDuration="1m17.893699298s" podCreationTimestamp="2026-02-16 13:52:02 +0000 UTC" firstStartedPulling="2026-02-16 
13:52:04.745172947 +0000 UTC m=+1213.809503648" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:53:19.888431576 +0000 UTC m=+1288.952762307" watchObservedRunningTime="2026-02-16 13:53:19.893699298 +0000 UTC m=+1288.958029999" Feb 16 13:53:21 crc kubenswrapper[4812]: I0216 13:53:21.007494 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="51f12264-af08-4cf2-9e76-98dc91b0b7a8" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 13:53:22 crc kubenswrapper[4812]: I0216 13:53:22.208019 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:53:22 crc kubenswrapper[4812]: I0216 13:53:22.216413 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7f34d582-3b55-4d2a-91b3-c64acd57981f-etc-swift\") pod \"swift-storage-0\" (UID: \"7f34d582-3b55-4d2a-91b3-c64acd57981f\") " pod="openstack/swift-storage-0" Feb 16 13:53:22 crc kubenswrapper[4812]: I0216 13:53:22.384855 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 16 13:53:22 crc kubenswrapper[4812]: I0216 13:53:22.437246 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7dzhm" podUID="2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70" containerName="ovn-controller" probeResult="failure" output=< Feb 16 13:53:22 crc kubenswrapper[4812]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 13:53:22 crc kubenswrapper[4812]: > Feb 16 13:53:22 crc kubenswrapper[4812]: I0216 13:53:22.466813 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:53:22 crc kubenswrapper[4812]: I0216 13:53:22.660993 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hjxr5" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.258234 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7dzhm-config-v8vrv"] Feb 16 13:53:23 crc kubenswrapper[4812]: E0216 13:53:23.259016 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5586976-e0b2-4971-9202-1804e20d413f" containerName="mariadb-account-create-update" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.259038 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5586976-e0b2-4971-9202-1804e20d413f" containerName="mariadb-account-create-update" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.259274 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5586976-e0b2-4971-9202-1804e20d413f" containerName="mariadb-account-create-update" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.261076 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.267904 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.276126 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7dzhm-config-v8vrv"] Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.334400 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-run\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.334582 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/57dec6e7-2b03-417e-9aca-c535aabeba2a-additional-scripts\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.334614 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-log-ovn\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.334687 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57dec6e7-2b03-417e-9aca-c535aabeba2a-scripts\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: 
\"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.334710 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv2xb\" (UniqueName: \"kubernetes.io/projected/57dec6e7-2b03-417e-9aca-c535aabeba2a-kube-api-access-lv2xb\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.334744 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-run-ovn\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.337222 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.436139 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-run-ovn\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.436198 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-run\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.436305 4812 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/57dec6e7-2b03-417e-9aca-c535aabeba2a-additional-scripts\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.436343 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-log-ovn\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.436416 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57dec6e7-2b03-417e-9aca-c535aabeba2a-scripts\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.436435 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv2xb\" (UniqueName: \"kubernetes.io/projected/57dec6e7-2b03-417e-9aca-c535aabeba2a-kube-api-access-lv2xb\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.436752 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-run\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.436774 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-run-ovn\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.436838 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-log-ovn\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.437912 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/57dec6e7-2b03-417e-9aca-c535aabeba2a-additional-scripts\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.440045 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57dec6e7-2b03-417e-9aca-c535aabeba2a-scripts\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.469289 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv2xb\" (UniqueName: \"kubernetes.io/projected/57dec6e7-2b03-417e-9aca-c535aabeba2a-kube-api-access-lv2xb\") pod \"ovn-controller-7dzhm-config-v8vrv\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:23 crc kubenswrapper[4812]: I0216 13:53:23.591959 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:24 crc kubenswrapper[4812]: I0216 13:53:23.874762 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7f34d582-3b55-4d2a-91b3-c64acd57981f","Type":"ContainerStarted","Data":"a8c38cd622af27faa095d6a783705987fcd3c5792af3fdeaf6805c6f2ff7cc3b"} Feb 16 13:53:24 crc kubenswrapper[4812]: I0216 13:53:24.921356 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7dzhm-config-v8vrv"] Feb 16 13:53:24 crc kubenswrapper[4812]: W0216 13:53:24.972485 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57dec6e7_2b03_417e_9aca_c535aabeba2a.slice/crio-105f892b5cb7358e9a4a517ceebed742e6734c067ba3a71d1b35fac0a845d70a WatchSource:0}: Error finding container 105f892b5cb7358e9a4a517ceebed742e6734c067ba3a71d1b35fac0a845d70a: Status 404 returned error can't find the container with id 105f892b5cb7358e9a4a517ceebed742e6734c067ba3a71d1b35fac0a845d70a Feb 16 13:53:25 crc kubenswrapper[4812]: I0216 13:53:25.905502 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7dzhm-config-v8vrv" event={"ID":"57dec6e7-2b03-417e-9aca-c535aabeba2a","Type":"ContainerStarted","Data":"5488a4fd597a3432eeae73392d5709fbf6bab4e90c2dcef7da2708de25c2d98e"} Feb 16 13:53:25 crc kubenswrapper[4812]: I0216 13:53:25.905886 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7dzhm-config-v8vrv" event={"ID":"57dec6e7-2b03-417e-9aca-c535aabeba2a","Type":"ContainerStarted","Data":"105f892b5cb7358e9a4a517ceebed742e6734c067ba3a71d1b35fac0a845d70a"} Feb 16 13:53:25 crc kubenswrapper[4812]: I0216 13:53:25.914048 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"7f34d582-3b55-4d2a-91b3-c64acd57981f","Type":"ContainerStarted","Data":"5e31c8759c46f0333d3a600d3277837cced07195bc921131515c00c52dd54ff0"} Feb 16 13:53:25 crc kubenswrapper[4812]: I0216 13:53:25.914100 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7f34d582-3b55-4d2a-91b3-c64acd57981f","Type":"ContainerStarted","Data":"4233ea8fcad19025d0de8f3bfd76492ef026c7c5e156685c15c8cc21663b011b"} Feb 16 13:53:25 crc kubenswrapper[4812]: I0216 13:53:25.914112 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7f34d582-3b55-4d2a-91b3-c64acd57981f","Type":"ContainerStarted","Data":"e77b541548561381f8fa0ee1b9a3048c3b52b87997e742a6b01123a956edce00"} Feb 16 13:53:26 crc kubenswrapper[4812]: I0216 13:53:26.931504 4812 generic.go:334] "Generic (PLEG): container finished" podID="57dec6e7-2b03-417e-9aca-c535aabeba2a" containerID="5488a4fd597a3432eeae73392d5709fbf6bab4e90c2dcef7da2708de25c2d98e" exitCode=0 Feb 16 13:53:26 crc kubenswrapper[4812]: I0216 13:53:26.931586 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7dzhm-config-v8vrv" event={"ID":"57dec6e7-2b03-417e-9aca-c535aabeba2a","Type":"ContainerDied","Data":"5488a4fd597a3432eeae73392d5709fbf6bab4e90c2dcef7da2708de25c2d98e"} Feb 16 13:53:26 crc kubenswrapper[4812]: I0216 13:53:26.936960 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7f34d582-3b55-4d2a-91b3-c64acd57981f","Type":"ContainerStarted","Data":"adf88c70933dace85253651ca6f9245d03e81c8d3dbce3125f86267125723c0e"} Feb 16 13:53:27 crc kubenswrapper[4812]: I0216 13:53:27.421685 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-7dzhm" Feb 16 13:53:31 crc kubenswrapper[4812]: I0216 13:53:31.132206 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" 
podUID="51f12264-af08-4cf2-9e76-98dc91b0b7a8" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 13:53:33 crc kubenswrapper[4812]: I0216 13:53:33.896003 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 13:53:33 crc kubenswrapper[4812]: I0216 13:53:33.990639 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.372675 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-cbl5p"] Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.375801 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cbl5p" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.398317 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cbl5p"] Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.413928 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-466e-account-create-update-22bqw"] Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.415672 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-466e-account-create-update-22bqw" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.423802 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.449593 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-466e-account-create-update-22bqw"] Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.518713 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-p8djg"] Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.520377 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-p8djg" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.536052 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-p8djg"] Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.560979 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spg87\" (UniqueName: \"kubernetes.io/projected/870350e2-3c24-4788-afb1-8d5a4d77172e-kube-api-access-spg87\") pod \"cinder-db-create-cbl5p\" (UID: \"870350e2-3c24-4788-afb1-8d5a4d77172e\") " pod="openstack/cinder-db-create-cbl5p" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.561024 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8spcw\" (UniqueName: \"kubernetes.io/projected/a96961c7-e6f3-4cbc-8498-b9e5f023ad2d-kube-api-access-8spcw\") pod \"cloudkitty-db-create-p8djg\" (UID: \"a96961c7-e6f3-4cbc-8498-b9e5f023ad2d\") " pod="openstack/cloudkitty-db-create-p8djg" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.561288 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a96961c7-e6f3-4cbc-8498-b9e5f023ad2d-operator-scripts\") pod \"cloudkitty-db-create-p8djg\" (UID: \"a96961c7-e6f3-4cbc-8498-b9e5f023ad2d\") " pod="openstack/cloudkitty-db-create-p8djg" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.561400 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fde98520-6555-417b-851c-14dccde518ad-operator-scripts\") pod \"cinder-466e-account-create-update-22bqw\" (UID: \"fde98520-6555-417b-851c-14dccde518ad\") " pod="openstack/cinder-466e-account-create-update-22bqw" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.561464 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/870350e2-3c24-4788-afb1-8d5a4d77172e-operator-scripts\") pod \"cinder-db-create-cbl5p\" (UID: \"870350e2-3c24-4788-afb1-8d5a4d77172e\") " pod="openstack/cinder-db-create-cbl5p" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.561526 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpfw5\" (UniqueName: \"kubernetes.io/projected/fde98520-6555-417b-851c-14dccde518ad-kube-api-access-kpfw5\") pod \"cinder-466e-account-create-update-22bqw\" (UID: \"fde98520-6555-417b-851c-14dccde518ad\") " pod="openstack/cinder-466e-account-create-update-22bqw" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.662928 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a96961c7-e6f3-4cbc-8498-b9e5f023ad2d-operator-scripts\") pod \"cloudkitty-db-create-p8djg\" (UID: \"a96961c7-e6f3-4cbc-8498-b9e5f023ad2d\") " pod="openstack/cloudkitty-db-create-p8djg" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.663011 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fde98520-6555-417b-851c-14dccde518ad-operator-scripts\") pod \"cinder-466e-account-create-update-22bqw\" (UID: \"fde98520-6555-417b-851c-14dccde518ad\") " pod="openstack/cinder-466e-account-create-update-22bqw" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.663037 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/870350e2-3c24-4788-afb1-8d5a4d77172e-operator-scripts\") pod \"cinder-db-create-cbl5p\" (UID: \"870350e2-3c24-4788-afb1-8d5a4d77172e\") " pod="openstack/cinder-db-create-cbl5p" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.663065 
4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpfw5\" (UniqueName: \"kubernetes.io/projected/fde98520-6555-417b-851c-14dccde518ad-kube-api-access-kpfw5\") pod \"cinder-466e-account-create-update-22bqw\" (UID: \"fde98520-6555-417b-851c-14dccde518ad\") " pod="openstack/cinder-466e-account-create-update-22bqw" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.663127 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spg87\" (UniqueName: \"kubernetes.io/projected/870350e2-3c24-4788-afb1-8d5a4d77172e-kube-api-access-spg87\") pod \"cinder-db-create-cbl5p\" (UID: \"870350e2-3c24-4788-afb1-8d5a4d77172e\") " pod="openstack/cinder-db-create-cbl5p" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.663144 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8spcw\" (UniqueName: \"kubernetes.io/projected/a96961c7-e6f3-4cbc-8498-b9e5f023ad2d-kube-api-access-8spcw\") pod \"cloudkitty-db-create-p8djg\" (UID: \"a96961c7-e6f3-4cbc-8498-b9e5f023ad2d\") " pod="openstack/cloudkitty-db-create-p8djg" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.664212 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fde98520-6555-417b-851c-14dccde518ad-operator-scripts\") pod \"cinder-466e-account-create-update-22bqw\" (UID: \"fde98520-6555-417b-851c-14dccde518ad\") " pod="openstack/cinder-466e-account-create-update-22bqw" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.664355 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/870350e2-3c24-4788-afb1-8d5a4d77172e-operator-scripts\") pod \"cinder-db-create-cbl5p\" (UID: \"870350e2-3c24-4788-afb1-8d5a4d77172e\") " pod="openstack/cinder-db-create-cbl5p" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.664634 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a96961c7-e6f3-4cbc-8498-b9e5f023ad2d-operator-scripts\") pod \"cloudkitty-db-create-p8djg\" (UID: \"a96961c7-e6f3-4cbc-8498-b9e5f023ad2d\") " pod="openstack/cloudkitty-db-create-p8djg" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.676292 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-eed8-account-create-update-zs2zn"] Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.681282 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-eed8-account-create-update-zs2zn" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.689388 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.715343 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spg87\" (UniqueName: \"kubernetes.io/projected/870350e2-3c24-4788-afb1-8d5a4d77172e-kube-api-access-spg87\") pod \"cinder-db-create-cbl5p\" (UID: \"870350e2-3c24-4788-afb1-8d5a4d77172e\") " pod="openstack/cinder-db-create-cbl5p" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.718134 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpfw5\" (UniqueName: \"kubernetes.io/projected/fde98520-6555-417b-851c-14dccde518ad-kube-api-access-kpfw5\") pod \"cinder-466e-account-create-update-22bqw\" (UID: \"fde98520-6555-417b-851c-14dccde518ad\") " pod="openstack/cinder-466e-account-create-update-22bqw" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.719571 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8spcw\" (UniqueName: \"kubernetes.io/projected/a96961c7-e6f3-4cbc-8498-b9e5f023ad2d-kube-api-access-8spcw\") pod \"cloudkitty-db-create-p8djg\" (UID: \"a96961c7-e6f3-4cbc-8498-b9e5f023ad2d\") 
" pod="openstack/cloudkitty-db-create-p8djg" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.724858 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-eed8-account-create-update-zs2zn"] Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.744485 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-466e-account-create-update-22bqw" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.775871 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49d92a9c-6e64-409e-a324-0061b9b451d0-operator-scripts\") pod \"cloudkitty-eed8-account-create-update-zs2zn\" (UID: \"49d92a9c-6e64-409e-a324-0061b9b451d0\") " pod="openstack/cloudkitty-eed8-account-create-update-zs2zn" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.776045 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2slv\" (UniqueName: \"kubernetes.io/projected/49d92a9c-6e64-409e-a324-0061b9b451d0-kube-api-access-v2slv\") pod \"cloudkitty-eed8-account-create-update-zs2zn\" (UID: \"49d92a9c-6e64-409e-a324-0061b9b451d0\") " pod="openstack/cloudkitty-eed8-account-create-update-zs2zn" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.808155 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-mjkrp"] Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.810033 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mjkrp" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.824242 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-b95b-account-create-update-bhdws"] Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.825980 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-b95b-account-create-update-bhdws" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.835177 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b95b-account-create-update-bhdws"] Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.841939 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.851821 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mjkrp"] Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.867116 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-c7brg"] Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.869378 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-c7brg" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.878193 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.878664 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vtcqc" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.878888 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.879063 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.882230 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzxxl\" (UniqueName: \"kubernetes.io/projected/c5d00fb2-e93c-4b84-b307-f322137b1be4-kube-api-access-xzxxl\") pod \"barbican-db-create-mjkrp\" (UID: \"c5d00fb2-e93c-4b84-b307-f322137b1be4\") " pod="openstack/barbican-db-create-mjkrp" Feb 16 
13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.882307 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgf5m\" (UniqueName: \"kubernetes.io/projected/3aee3570-b33b-4898-ad56-62202a1dd25b-kube-api-access-bgf5m\") pod \"barbican-b95b-account-create-update-bhdws\" (UID: \"3aee3570-b33b-4898-ad56-62202a1dd25b\") " pod="openstack/barbican-b95b-account-create-update-bhdws" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.882386 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2slv\" (UniqueName: \"kubernetes.io/projected/49d92a9c-6e64-409e-a324-0061b9b451d0-kube-api-access-v2slv\") pod \"cloudkitty-eed8-account-create-update-zs2zn\" (UID: \"49d92a9c-6e64-409e-a324-0061b9b451d0\") " pod="openstack/cloudkitty-eed8-account-create-update-zs2zn" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.882419 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5d00fb2-e93c-4b84-b307-f322137b1be4-operator-scripts\") pod \"barbican-db-create-mjkrp\" (UID: \"c5d00fb2-e93c-4b84-b307-f322137b1be4\") " pod="openstack/barbican-db-create-mjkrp" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.887855 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aee3570-b33b-4898-ad56-62202a1dd25b-operator-scripts\") pod \"barbican-b95b-account-create-update-bhdws\" (UID: \"3aee3570-b33b-4898-ad56-62202a1dd25b\") " pod="openstack/barbican-b95b-account-create-update-bhdws" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.888166 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49d92a9c-6e64-409e-a324-0061b9b451d0-operator-scripts\") pod 
\"cloudkitty-eed8-account-create-update-zs2zn\" (UID: \"49d92a9c-6e64-409e-a324-0061b9b451d0\") " pod="openstack/cloudkitty-eed8-account-create-update-zs2zn" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.889647 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49d92a9c-6e64-409e-a324-0061b9b451d0-operator-scripts\") pod \"cloudkitty-eed8-account-create-update-zs2zn\" (UID: \"49d92a9c-6e64-409e-a324-0061b9b451d0\") " pod="openstack/cloudkitty-eed8-account-create-update-zs2zn" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.894419 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-p8djg" Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.919931 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-c7brg"] Feb 16 13:53:34 crc kubenswrapper[4812]: I0216 13:53:34.987433 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2slv\" (UniqueName: \"kubernetes.io/projected/49d92a9c-6e64-409e-a324-0061b9b451d0-kube-api-access-v2slv\") pod \"cloudkitty-eed8-account-create-update-zs2zn\" (UID: \"49d92a9c-6e64-409e-a324-0061b9b451d0\") " pod="openstack/cloudkitty-eed8-account-create-update-zs2zn" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.013726 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7s5k\" (UniqueName: \"kubernetes.io/projected/919aaed2-0230-4b07-aea8-fb57e6917cff-kube-api-access-k7s5k\") pod \"keystone-db-sync-c7brg\" (UID: \"919aaed2-0230-4b07-aea8-fb57e6917cff\") " pod="openstack/keystone-db-sync-c7brg" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.013818 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzxxl\" (UniqueName: 
\"kubernetes.io/projected/c5d00fb2-e93c-4b84-b307-f322137b1be4-kube-api-access-xzxxl\") pod \"barbican-db-create-mjkrp\" (UID: \"c5d00fb2-e93c-4b84-b307-f322137b1be4\") " pod="openstack/barbican-db-create-mjkrp" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.013879 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgf5m\" (UniqueName: \"kubernetes.io/projected/3aee3570-b33b-4898-ad56-62202a1dd25b-kube-api-access-bgf5m\") pod \"barbican-b95b-account-create-update-bhdws\" (UID: \"3aee3570-b33b-4898-ad56-62202a1dd25b\") " pod="openstack/barbican-b95b-account-create-update-bhdws" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.013911 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5d00fb2-e93c-4b84-b307-f322137b1be4-operator-scripts\") pod \"barbican-db-create-mjkrp\" (UID: \"c5d00fb2-e93c-4b84-b307-f322137b1be4\") " pod="openstack/barbican-db-create-mjkrp" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.013954 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919aaed2-0230-4b07-aea8-fb57e6917cff-config-data\") pod \"keystone-db-sync-c7brg\" (UID: \"919aaed2-0230-4b07-aea8-fb57e6917cff\") " pod="openstack/keystone-db-sync-c7brg" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.013989 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aee3570-b33b-4898-ad56-62202a1dd25b-operator-scripts\") pod \"barbican-b95b-account-create-update-bhdws\" (UID: \"3aee3570-b33b-4898-ad56-62202a1dd25b\") " pod="openstack/barbican-b95b-account-create-update-bhdws" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.014254 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919aaed2-0230-4b07-aea8-fb57e6917cff-combined-ca-bundle\") pod \"keystone-db-sync-c7brg\" (UID: \"919aaed2-0230-4b07-aea8-fb57e6917cff\") " pod="openstack/keystone-db-sync-c7brg" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.015848 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5d00fb2-e93c-4b84-b307-f322137b1be4-operator-scripts\") pod \"barbican-db-create-mjkrp\" (UID: \"c5d00fb2-e93c-4b84-b307-f322137b1be4\") " pod="openstack/barbican-db-create-mjkrp" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.016395 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cbl5p" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.017756 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-eed8-account-create-update-zs2zn" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.017794 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aee3570-b33b-4898-ad56-62202a1dd25b-operator-scripts\") pod \"barbican-b95b-account-create-update-bhdws\" (UID: \"3aee3570-b33b-4898-ad56-62202a1dd25b\") " pod="openstack/barbican-b95b-account-create-update-bhdws" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.067768 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzxxl\" (UniqueName: \"kubernetes.io/projected/c5d00fb2-e93c-4b84-b307-f322137b1be4-kube-api-access-xzxxl\") pod \"barbican-db-create-mjkrp\" (UID: \"c5d00fb2-e93c-4b84-b307-f322137b1be4\") " pod="openstack/barbican-db-create-mjkrp" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.071140 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgf5m\" (UniqueName: 
\"kubernetes.io/projected/3aee3570-b33b-4898-ad56-62202a1dd25b-kube-api-access-bgf5m\") pod \"barbican-b95b-account-create-update-bhdws\" (UID: \"3aee3570-b33b-4898-ad56-62202a1dd25b\") " pod="openstack/barbican-b95b-account-create-update-bhdws" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.114575 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-x869h"] Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.116565 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x869h" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.121306 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7s5k\" (UniqueName: \"kubernetes.io/projected/919aaed2-0230-4b07-aea8-fb57e6917cff-kube-api-access-k7s5k\") pod \"keystone-db-sync-c7brg\" (UID: \"919aaed2-0230-4b07-aea8-fb57e6917cff\") " pod="openstack/keystone-db-sync-c7brg" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.121429 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919aaed2-0230-4b07-aea8-fb57e6917cff-config-data\") pod \"keystone-db-sync-c7brg\" (UID: \"919aaed2-0230-4b07-aea8-fb57e6917cff\") " pod="openstack/keystone-db-sync-c7brg" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.121560 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919aaed2-0230-4b07-aea8-fb57e6917cff-combined-ca-bundle\") pod \"keystone-db-sync-c7brg\" (UID: \"919aaed2-0230-4b07-aea8-fb57e6917cff\") " pod="openstack/keystone-db-sync-c7brg" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.126733 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919aaed2-0230-4b07-aea8-fb57e6917cff-config-data\") pod \"keystone-db-sync-c7brg\" (UID: 
\"919aaed2-0230-4b07-aea8-fb57e6917cff\") " pod="openstack/keystone-db-sync-c7brg" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.132650 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919aaed2-0230-4b07-aea8-fb57e6917cff-combined-ca-bundle\") pod \"keystone-db-sync-c7brg\" (UID: \"919aaed2-0230-4b07-aea8-fb57e6917cff\") " pod="openstack/keystone-db-sync-c7brg" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.148335 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mjkrp" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.150082 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7s5k\" (UniqueName: \"kubernetes.io/projected/919aaed2-0230-4b07-aea8-fb57e6917cff-kube-api-access-k7s5k\") pod \"keystone-db-sync-c7brg\" (UID: \"919aaed2-0230-4b07-aea8-fb57e6917cff\") " pod="openstack/keystone-db-sync-c7brg" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.164938 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b95b-account-create-update-bhdws" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.183529 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-x869h"] Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.211300 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-c7brg" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.216193 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-08d4-account-create-update-846z4"] Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.218020 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-08d4-account-create-update-846z4" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.224603 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x8bq\" (UniqueName: \"kubernetes.io/projected/4445f438-ce8c-4014-ad6f-b892beed381a-kube-api-access-6x8bq\") pod \"neutron-db-create-x869h\" (UID: \"4445f438-ce8c-4014-ad6f-b892beed381a\") " pod="openstack/neutron-db-create-x869h" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.224780 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4445f438-ce8c-4014-ad6f-b892beed381a-operator-scripts\") pod \"neutron-db-create-x869h\" (UID: \"4445f438-ce8c-4014-ad6f-b892beed381a\") " pod="openstack/neutron-db-create-x869h" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.228936 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.251941 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-08d4-account-create-update-846z4"] Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.326358 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/862482a0-fe2f-481c-a819-4539a198dc9d-operator-scripts\") pod \"neutron-08d4-account-create-update-846z4\" (UID: \"862482a0-fe2f-481c-a819-4539a198dc9d\") " pod="openstack/neutron-08d4-account-create-update-846z4" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.326483 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4445f438-ce8c-4014-ad6f-b892beed381a-operator-scripts\") pod \"neutron-db-create-x869h\" (UID: 
\"4445f438-ce8c-4014-ad6f-b892beed381a\") " pod="openstack/neutron-db-create-x869h" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.326649 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9twwj\" (UniqueName: \"kubernetes.io/projected/862482a0-fe2f-481c-a819-4539a198dc9d-kube-api-access-9twwj\") pod \"neutron-08d4-account-create-update-846z4\" (UID: \"862482a0-fe2f-481c-a819-4539a198dc9d\") " pod="openstack/neutron-08d4-account-create-update-846z4" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.326696 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x8bq\" (UniqueName: \"kubernetes.io/projected/4445f438-ce8c-4014-ad6f-b892beed381a-kube-api-access-6x8bq\") pod \"neutron-db-create-x869h\" (UID: \"4445f438-ce8c-4014-ad6f-b892beed381a\") " pod="openstack/neutron-db-create-x869h" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.327914 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4445f438-ce8c-4014-ad6f-b892beed381a-operator-scripts\") pod \"neutron-db-create-x869h\" (UID: \"4445f438-ce8c-4014-ad6f-b892beed381a\") " pod="openstack/neutron-db-create-x869h" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.360240 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x8bq\" (UniqueName: \"kubernetes.io/projected/4445f438-ce8c-4014-ad6f-b892beed381a-kube-api-access-6x8bq\") pod \"neutron-db-create-x869h\" (UID: \"4445f438-ce8c-4014-ad6f-b892beed381a\") " pod="openstack/neutron-db-create-x869h" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.428375 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9twwj\" (UniqueName: \"kubernetes.io/projected/862482a0-fe2f-481c-a819-4539a198dc9d-kube-api-access-9twwj\") pod 
\"neutron-08d4-account-create-update-846z4\" (UID: \"862482a0-fe2f-481c-a819-4539a198dc9d\") " pod="openstack/neutron-08d4-account-create-update-846z4" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.428783 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/862482a0-fe2f-481c-a819-4539a198dc9d-operator-scripts\") pod \"neutron-08d4-account-create-update-846z4\" (UID: \"862482a0-fe2f-481c-a819-4539a198dc9d\") " pod="openstack/neutron-08d4-account-create-update-846z4" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.429540 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/862482a0-fe2f-481c-a819-4539a198dc9d-operator-scripts\") pod \"neutron-08d4-account-create-update-846z4\" (UID: \"862482a0-fe2f-481c-a819-4539a198dc9d\") " pod="openstack/neutron-08d4-account-create-update-846z4" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.454420 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9twwj\" (UniqueName: \"kubernetes.io/projected/862482a0-fe2f-481c-a819-4539a198dc9d-kube-api-access-9twwj\") pod \"neutron-08d4-account-create-update-846z4\" (UID: \"862482a0-fe2f-481c-a819-4539a198dc9d\") " pod="openstack/neutron-08d4-account-create-update-846z4" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.479372 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x869h" Feb 16 13:53:35 crc kubenswrapper[4812]: I0216 13:53:35.540195 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-08d4-account-create-update-846z4" Feb 16 13:53:37 crc kubenswrapper[4812]: E0216 13:53:37.171763 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 16 13:53:37 crc kubenswrapper[4812]: E0216 13:53:37.172209 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcc8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],
},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-mwzf9_openstack(03e0b815-7641-435c-9934-05f5c5307962): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:53:37 crc kubenswrapper[4812]: E0216 13:53:37.173382 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-mwzf9" podUID="03e0b815-7641-435c-9934-05f5c5307962" Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.236883 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7dzhm-config-v8vrv" event={"ID":"57dec6e7-2b03-417e-9aca-c535aabeba2a","Type":"ContainerDied","Data":"105f892b5cb7358e9a4a517ceebed742e6734c067ba3a71d1b35fac0a845d70a"} Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.236941 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="105f892b5cb7358e9a4a517ceebed742e6734c067ba3a71d1b35fac0a845d70a" Feb 16 13:53:37 crc kubenswrapper[4812]: E0216 13:53:37.249206 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-mwzf9" podUID="03e0b815-7641-435c-9934-05f5c5307962" Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 
13:53:37.267281 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.373164 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-run\") pod \"57dec6e7-2b03-417e-9aca-c535aabeba2a\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.373236 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/57dec6e7-2b03-417e-9aca-c535aabeba2a-additional-scripts\") pod \"57dec6e7-2b03-417e-9aca-c535aabeba2a\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.373265 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-run-ovn\") pod \"57dec6e7-2b03-417e-9aca-c535aabeba2a\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.373321 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv2xb\" (UniqueName: \"kubernetes.io/projected/57dec6e7-2b03-417e-9aca-c535aabeba2a-kube-api-access-lv2xb\") pod \"57dec6e7-2b03-417e-9aca-c535aabeba2a\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.373476 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57dec6e7-2b03-417e-9aca-c535aabeba2a-scripts\") pod \"57dec6e7-2b03-417e-9aca-c535aabeba2a\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.373573 4812 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-log-ovn\") pod \"57dec6e7-2b03-417e-9aca-c535aabeba2a\" (UID: \"57dec6e7-2b03-417e-9aca-c535aabeba2a\") " Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.374668 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-run" (OuterVolumeSpecName: "var-run") pod "57dec6e7-2b03-417e-9aca-c535aabeba2a" (UID: "57dec6e7-2b03-417e-9aca-c535aabeba2a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.375499 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "57dec6e7-2b03-417e-9aca-c535aabeba2a" (UID: "57dec6e7-2b03-417e-9aca-c535aabeba2a"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.375626 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57dec6e7-2b03-417e-9aca-c535aabeba2a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "57dec6e7-2b03-417e-9aca-c535aabeba2a" (UID: "57dec6e7-2b03-417e-9aca-c535aabeba2a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.376471 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "57dec6e7-2b03-417e-9aca-c535aabeba2a" (UID: "57dec6e7-2b03-417e-9aca-c535aabeba2a"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.377026 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57dec6e7-2b03-417e-9aca-c535aabeba2a-scripts" (OuterVolumeSpecName: "scripts") pod "57dec6e7-2b03-417e-9aca-c535aabeba2a" (UID: "57dec6e7-2b03-417e-9aca-c535aabeba2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.382243 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57dec6e7-2b03-417e-9aca-c535aabeba2a-kube-api-access-lv2xb" (OuterVolumeSpecName: "kube-api-access-lv2xb") pod "57dec6e7-2b03-417e-9aca-c535aabeba2a" (UID: "57dec6e7-2b03-417e-9aca-c535aabeba2a"). InnerVolumeSpecName "kube-api-access-lv2xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.475724 4812 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.475765 4812 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.475780 4812 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/57dec6e7-2b03-417e-9aca-c535aabeba2a-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.475796 4812 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/57dec6e7-2b03-417e-9aca-c535aabeba2a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 
16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.475808 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv2xb\" (UniqueName: \"kubernetes.io/projected/57dec6e7-2b03-417e-9aca-c535aabeba2a-kube-api-access-lv2xb\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:37 crc kubenswrapper[4812]: I0216 13:53:37.475821 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57dec6e7-2b03-417e-9aca-c535aabeba2a-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:38 crc kubenswrapper[4812]: I0216 13:53:38.264357 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7dzhm-config-v8vrv" Feb 16 13:53:38 crc kubenswrapper[4812]: I0216 13:53:38.266511 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7f34d582-3b55-4d2a-91b3-c64acd57981f","Type":"ContainerStarted","Data":"2c0e928b4e58b0a07d8d804ec88facbdc538591ac1eb66973fbab4c866e9ec85"} Feb 16 13:53:38 crc kubenswrapper[4812]: I0216 13:53:38.316646 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-c7brg"] Feb 16 13:53:38 crc kubenswrapper[4812]: I0216 13:53:38.369699 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b95b-account-create-update-bhdws"] Feb 16 13:53:38 crc kubenswrapper[4812]: I0216 13:53:38.388014 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-x869h"] Feb 16 13:53:38 crc kubenswrapper[4812]: I0216 13:53:38.548543 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-08d4-account-create-update-846z4"] Feb 16 13:53:38 crc kubenswrapper[4812]: I0216 13:53:38.573888 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-p8djg"] Feb 16 13:53:38 crc kubenswrapper[4812]: W0216 13:53:38.604890 4812 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda96961c7_e6f3_4cbc_8498_b9e5f023ad2d.slice/crio-d3ebd62b60663203552b127637ee1753ddb2416b78622eddce10a10ae145f636 WatchSource:0}: Error finding container d3ebd62b60663203552b127637ee1753ddb2416b78622eddce10a10ae145f636: Status 404 returned error can't find the container with id d3ebd62b60663203552b127637ee1753ddb2416b78622eddce10a10ae145f636 Feb 16 13:53:38 crc kubenswrapper[4812]: I0216 13:53:38.648404 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-eed8-account-create-update-zs2zn"] Feb 16 13:53:38 crc kubenswrapper[4812]: I0216 13:53:38.738672 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7dzhm-config-v8vrv"] Feb 16 13:53:38 crc kubenswrapper[4812]: I0216 13:53:38.769739 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7dzhm-config-v8vrv"] Feb 16 13:53:38 crc kubenswrapper[4812]: I0216 13:53:38.797945 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cbl5p"] Feb 16 13:53:38 crc kubenswrapper[4812]: I0216 13:53:38.822396 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-466e-account-create-update-22bqw"] Feb 16 13:53:38 crc kubenswrapper[4812]: I0216 13:53:38.852528 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mjkrp"] Feb 16 13:53:39 crc kubenswrapper[4812]: W0216 13:53:39.073840 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5d00fb2_e93c_4b84_b307_f322137b1be4.slice/crio-006baff2fc15ed7f13d80d429c96443d95438fe376582bc395b5b72178e89b8e WatchSource:0}: Error finding container 006baff2fc15ed7f13d80d429c96443d95438fe376582bc395b5b72178e89b8e: Status 404 returned error can't find the container with id 006baff2fc15ed7f13d80d429c96443d95438fe376582bc395b5b72178e89b8e Feb 16 13:53:39 crc 
kubenswrapper[4812]: I0216 13:53:39.289050 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7f34d582-3b55-4d2a-91b3-c64acd57981f","Type":"ContainerStarted","Data":"c1bb4212eb46736361a4870976d99c591f4d28bc471dc63eeed66962cc2ac065"} Feb 16 13:53:39 crc kubenswrapper[4812]: I0216 13:53:39.290362 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b95b-account-create-update-bhdws" event={"ID":"3aee3570-b33b-4898-ad56-62202a1dd25b","Type":"ContainerStarted","Data":"f697cb5ad8da805fc9b19666d4bd5484dc5ec407b25e1282a0864ff968347da7"} Feb 16 13:53:39 crc kubenswrapper[4812]: I0216 13:53:39.291703 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x869h" event={"ID":"4445f438-ce8c-4014-ad6f-b892beed381a","Type":"ContainerStarted","Data":"38146dde31abbe42429203525796659f2696f8ba6195bb373550d9ef46048bdd"} Feb 16 13:53:39 crc kubenswrapper[4812]: I0216 13:53:39.291730 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x869h" event={"ID":"4445f438-ce8c-4014-ad6f-b892beed381a","Type":"ContainerStarted","Data":"5ebdcf119a596782784fdd6aafae7099bc5a8b679277c2aef7de2bf25c305739"} Feb 16 13:53:39 crc kubenswrapper[4812]: I0216 13:53:39.295999 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cbl5p" event={"ID":"870350e2-3c24-4788-afb1-8d5a4d77172e","Type":"ContainerStarted","Data":"db4d8f17c750499cf2ac0e50c22830ea187171602bee578e0923205f718fd6f7"} Feb 16 13:53:39 crc kubenswrapper[4812]: I0216 13:53:39.306279 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-08d4-account-create-update-846z4" event={"ID":"862482a0-fe2f-481c-a819-4539a198dc9d","Type":"ContainerStarted","Data":"c7ce3707d3dd8f34be7c672c96c7336f0058331114a96b125ebf5e4168bdb79b"} Feb 16 13:53:39 crc kubenswrapper[4812]: I0216 13:53:39.306342 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-08d4-account-create-update-846z4" event={"ID":"862482a0-fe2f-481c-a819-4539a198dc9d","Type":"ContainerStarted","Data":"663dd5ab12b5bbb3244bea9ffe70b8a4e3f4f84e77dde58f78a03f5cb1262440"} Feb 16 13:53:39 crc kubenswrapper[4812]: I0216 13:53:39.310595 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-eed8-account-create-update-zs2zn" event={"ID":"49d92a9c-6e64-409e-a324-0061b9b451d0","Type":"ContainerStarted","Data":"e01fdef968ea944fb0872f2d19076993a12b4d52fe0f66317aecfa9138fcf303"} Feb 16 13:53:39 crc kubenswrapper[4812]: I0216 13:53:39.322788 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-x869h" podStartSLOduration=5.322759054 podStartE2EDuration="5.322759054s" podCreationTimestamp="2026-02-16 13:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:53:39.313420463 +0000 UTC m=+1308.377751164" watchObservedRunningTime="2026-02-16 13:53:39.322759054 +0000 UTC m=+1308.387089755" Feb 16 13:53:39 crc kubenswrapper[4812]: I0216 13:53:39.323816 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-c7brg" event={"ID":"919aaed2-0230-4b07-aea8-fb57e6917cff","Type":"ContainerStarted","Data":"500986678d16a03a2901441aff2e36d9b11c4927d07d57bb2f3ce9a3dbfba439"} Feb 16 13:53:39 crc kubenswrapper[4812]: I0216 13:53:39.329578 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-466e-account-create-update-22bqw" event={"ID":"fde98520-6555-417b-851c-14dccde518ad","Type":"ContainerStarted","Data":"d56350d7739df68b31c7d1c32ed338d6c2542bc41a50358b0aec09dd85ce5efb"} Feb 16 13:53:39 crc kubenswrapper[4812]: I0216 13:53:39.333356 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-08d4-account-create-update-846z4" podStartSLOduration=5.333339842 podStartE2EDuration="5.333339842s" 
podCreationTimestamp="2026-02-16 13:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:53:39.329401687 +0000 UTC m=+1308.393732388" watchObservedRunningTime="2026-02-16 13:53:39.333339842 +0000 UTC m=+1308.397670543" Feb 16 13:53:39 crc kubenswrapper[4812]: I0216 13:53:39.340228 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mjkrp" event={"ID":"c5d00fb2-e93c-4b84-b307-f322137b1be4","Type":"ContainerStarted","Data":"006baff2fc15ed7f13d80d429c96443d95438fe376582bc395b5b72178e89b8e"} Feb 16 13:53:39 crc kubenswrapper[4812]: I0216 13:53:39.344060 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-p8djg" event={"ID":"a96961c7-e6f3-4cbc-8498-b9e5f023ad2d","Type":"ContainerStarted","Data":"d3ebd62b60663203552b127637ee1753ddb2416b78622eddce10a10ae145f636"} Feb 16 13:53:39 crc kubenswrapper[4812]: I0216 13:53:39.901166 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57dec6e7-2b03-417e-9aca-c535aabeba2a" path="/var/lib/kubelet/pods/57dec6e7-2b03-417e-9aca-c535aabeba2a/volumes" Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.364901 4812 generic.go:334] "Generic (PLEG): container finished" podID="870350e2-3c24-4788-afb1-8d5a4d77172e" containerID="3bf82e30323b293ee35e9dc25e26e8fd94d821b15470c91b5afba006e95d7adc" exitCode=0 Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.365267 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cbl5p" event={"ID":"870350e2-3c24-4788-afb1-8d5a4d77172e","Type":"ContainerDied","Data":"3bf82e30323b293ee35e9dc25e26e8fd94d821b15470c91b5afba006e95d7adc"} Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.367377 4812 generic.go:334] "Generic (PLEG): container finished" podID="c5d00fb2-e93c-4b84-b307-f322137b1be4" 
containerID="c9dae9e1e5ca837361102ca3bb5914434a73e943616c143e80467e4d838fcb65" exitCode=0 Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.367513 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mjkrp" event={"ID":"c5d00fb2-e93c-4b84-b307-f322137b1be4","Type":"ContainerDied","Data":"c9dae9e1e5ca837361102ca3bb5914434a73e943616c143e80467e4d838fcb65"} Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.369885 4812 generic.go:334] "Generic (PLEG): container finished" podID="a96961c7-e6f3-4cbc-8498-b9e5f023ad2d" containerID="50371a462b9a7d6943988bc67f7d3c1d2fc29fcc3cecae39c2179648bf384e2a" exitCode=0 Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.369989 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-p8djg" event={"ID":"a96961c7-e6f3-4cbc-8498-b9e5f023ad2d","Type":"ContainerDied","Data":"50371a462b9a7d6943988bc67f7d3c1d2fc29fcc3cecae39c2179648bf384e2a"} Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.375300 4812 generic.go:334] "Generic (PLEG): container finished" podID="862482a0-fe2f-481c-a819-4539a198dc9d" containerID="c7ce3707d3dd8f34be7c672c96c7336f0058331114a96b125ebf5e4168bdb79b" exitCode=0 Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.375419 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-08d4-account-create-update-846z4" event={"ID":"862482a0-fe2f-481c-a819-4539a198dc9d","Type":"ContainerDied","Data":"c7ce3707d3dd8f34be7c672c96c7336f0058331114a96b125ebf5e4168bdb79b"} Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.382130 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7f34d582-3b55-4d2a-91b3-c64acd57981f","Type":"ContainerStarted","Data":"5395e15ef4619d0ea842eac4d9afb1ad83047940471490fc84255fbc0bd9756c"} Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.382262 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"7f34d582-3b55-4d2a-91b3-c64acd57981f","Type":"ContainerStarted","Data":"2143e66fac740f46e0f8cf8f56c467df55ec2bff7e7f0a021dcee1c46202b0f5"} Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.391977 4812 generic.go:334] "Generic (PLEG): container finished" podID="fde98520-6555-417b-851c-14dccde518ad" containerID="5fea5389c5170fdba10b84a6e88b8a99cfa8c7b6bcc240ddfac70cd07febbf90" exitCode=0 Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.392085 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-466e-account-create-update-22bqw" event={"ID":"fde98520-6555-417b-851c-14dccde518ad","Type":"ContainerDied","Data":"5fea5389c5170fdba10b84a6e88b8a99cfa8c7b6bcc240ddfac70cd07febbf90"} Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.402964 4812 generic.go:334] "Generic (PLEG): container finished" podID="3aee3570-b33b-4898-ad56-62202a1dd25b" containerID="8e11f2007b770e95e73f5cf461bada711f11a105feac23ff151108f101f2a3fa" exitCode=0 Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.403169 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b95b-account-create-update-bhdws" event={"ID":"3aee3570-b33b-4898-ad56-62202a1dd25b","Type":"ContainerDied","Data":"8e11f2007b770e95e73f5cf461bada711f11a105feac23ff151108f101f2a3fa"} Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.405387 4812 generic.go:334] "Generic (PLEG): container finished" podID="4445f438-ce8c-4014-ad6f-b892beed381a" containerID="38146dde31abbe42429203525796659f2696f8ba6195bb373550d9ef46048bdd" exitCode=0 Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.405620 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x869h" event={"ID":"4445f438-ce8c-4014-ad6f-b892beed381a","Type":"ContainerDied","Data":"38146dde31abbe42429203525796659f2696f8ba6195bb373550d9ef46048bdd"} Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.411510 4812 generic.go:334] "Generic (PLEG): container finished" 
podID="49d92a9c-6e64-409e-a324-0061b9b451d0" containerID="41fe5f5b2186e0fbaa128acb0c5839bc16ef9fe777a37983f299d271181c1325" exitCode=0 Feb 16 13:53:40 crc kubenswrapper[4812]: I0216 13:53:40.411584 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-eed8-account-create-update-zs2zn" event={"ID":"49d92a9c-6e64-409e-a324-0061b9b451d0","Type":"ContainerDied","Data":"41fe5f5b2186e0fbaa128acb0c5839bc16ef9fe777a37983f299d271181c1325"} Feb 16 13:53:41 crc kubenswrapper[4812]: I0216 13:53:41.010997 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 13:53:41 crc kubenswrapper[4812]: I0216 13:53:41.426248 4812 generic.go:334] "Generic (PLEG): container finished" podID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerID="bd870744e6d645686b23ecaf761646cbeb08e898465be552377c3334631d1441" exitCode=0 Feb 16 13:53:41 crc kubenswrapper[4812]: I0216 13:53:41.426625 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e02a9868-e12c-4a65-9ba5-4a5965131b5b","Type":"ContainerDied","Data":"bd870744e6d645686b23ecaf761646cbeb08e898465be552377c3334631d1441"} Feb 16 13:53:44 crc kubenswrapper[4812]: I0216 13:53:44.549076 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:53:44 crc kubenswrapper[4812]: I0216 13:53:44.549489 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:53:45 crc 
kubenswrapper[4812]: I0216 13:53:45.433726 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b95b-account-create-update-bhdws" Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.459361 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-eed8-account-create-update-zs2zn" Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.508239 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b95b-account-create-update-bhdws" event={"ID":"3aee3570-b33b-4898-ad56-62202a1dd25b","Type":"ContainerDied","Data":"f697cb5ad8da805fc9b19666d4bd5484dc5ec407b25e1282a0864ff968347da7"} Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.508291 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f697cb5ad8da805fc9b19666d4bd5484dc5ec407b25e1282a0864ff968347da7" Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.508369 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b95b-account-create-update-bhdws" Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.525833 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-eed8-account-create-update-zs2zn" event={"ID":"49d92a9c-6e64-409e-a324-0061b9b451d0","Type":"ContainerDied","Data":"e01fdef968ea944fb0872f2d19076993a12b4d52fe0f66317aecfa9138fcf303"} Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.525878 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e01fdef968ea944fb0872f2d19076993a12b4d52fe0f66317aecfa9138fcf303" Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.525928 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-eed8-account-create-update-zs2zn" Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.545306 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgf5m\" (UniqueName: \"kubernetes.io/projected/3aee3570-b33b-4898-ad56-62202a1dd25b-kube-api-access-bgf5m\") pod \"3aee3570-b33b-4898-ad56-62202a1dd25b\" (UID: \"3aee3570-b33b-4898-ad56-62202a1dd25b\") " Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.545672 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aee3570-b33b-4898-ad56-62202a1dd25b-operator-scripts\") pod \"3aee3570-b33b-4898-ad56-62202a1dd25b\" (UID: \"3aee3570-b33b-4898-ad56-62202a1dd25b\") " Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.547044 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aee3570-b33b-4898-ad56-62202a1dd25b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3aee3570-b33b-4898-ad56-62202a1dd25b" (UID: "3aee3570-b33b-4898-ad56-62202a1dd25b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.553797 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aee3570-b33b-4898-ad56-62202a1dd25b-kube-api-access-bgf5m" (OuterVolumeSpecName: "kube-api-access-bgf5m") pod "3aee3570-b33b-4898-ad56-62202a1dd25b" (UID: "3aee3570-b33b-4898-ad56-62202a1dd25b"). InnerVolumeSpecName "kube-api-access-bgf5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.646863 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2slv\" (UniqueName: \"kubernetes.io/projected/49d92a9c-6e64-409e-a324-0061b9b451d0-kube-api-access-v2slv\") pod \"49d92a9c-6e64-409e-a324-0061b9b451d0\" (UID: \"49d92a9c-6e64-409e-a324-0061b9b451d0\") " Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.646981 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49d92a9c-6e64-409e-a324-0061b9b451d0-operator-scripts\") pod \"49d92a9c-6e64-409e-a324-0061b9b451d0\" (UID: \"49d92a9c-6e64-409e-a324-0061b9b451d0\") " Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.647614 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aee3570-b33b-4898-ad56-62202a1dd25b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.647639 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgf5m\" (UniqueName: \"kubernetes.io/projected/3aee3570-b33b-4898-ad56-62202a1dd25b-kube-api-access-bgf5m\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.647897 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d92a9c-6e64-409e-a324-0061b9b451d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49d92a9c-6e64-409e-a324-0061b9b451d0" (UID: "49d92a9c-6e64-409e-a324-0061b9b451d0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.652089 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d92a9c-6e64-409e-a324-0061b9b451d0-kube-api-access-v2slv" (OuterVolumeSpecName: "kube-api-access-v2slv") pod "49d92a9c-6e64-409e-a324-0061b9b451d0" (UID: "49d92a9c-6e64-409e-a324-0061b9b451d0"). InnerVolumeSpecName "kube-api-access-v2slv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.750206 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2slv\" (UniqueName: \"kubernetes.io/projected/49d92a9c-6e64-409e-a324-0061b9b451d0-kube-api-access-v2slv\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:45 crc kubenswrapper[4812]: I0216 13:53:45.750266 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49d92a9c-6e64-409e-a324-0061b9b451d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.048032 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x869h" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.053831 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mjkrp" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.072323 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cbl5p" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.095526 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-p8djg" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.114010 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-466e-account-create-update-22bqw" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.114880 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-08d4-account-create-update-846z4" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.213865 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4445f438-ce8c-4014-ad6f-b892beed381a-operator-scripts\") pod \"4445f438-ce8c-4014-ad6f-b892beed381a\" (UID: \"4445f438-ce8c-4014-ad6f-b892beed381a\") " Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.213938 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpfw5\" (UniqueName: \"kubernetes.io/projected/fde98520-6555-417b-851c-14dccde518ad-kube-api-access-kpfw5\") pod \"fde98520-6555-417b-851c-14dccde518ad\" (UID: \"fde98520-6555-417b-851c-14dccde518ad\") " Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.214032 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8spcw\" (UniqueName: \"kubernetes.io/projected/a96961c7-e6f3-4cbc-8498-b9e5f023ad2d-kube-api-access-8spcw\") pod \"a96961c7-e6f3-4cbc-8498-b9e5f023ad2d\" (UID: \"a96961c7-e6f3-4cbc-8498-b9e5f023ad2d\") " Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.214084 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spg87\" (UniqueName: \"kubernetes.io/projected/870350e2-3c24-4788-afb1-8d5a4d77172e-kube-api-access-spg87\") pod \"870350e2-3c24-4788-afb1-8d5a4d77172e\" (UID: \"870350e2-3c24-4788-afb1-8d5a4d77172e\") " Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.214132 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/fde98520-6555-417b-851c-14dccde518ad-operator-scripts\") pod \"fde98520-6555-417b-851c-14dccde518ad\" (UID: \"fde98520-6555-417b-851c-14dccde518ad\") " Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.214180 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a96961c7-e6f3-4cbc-8498-b9e5f023ad2d-operator-scripts\") pod \"a96961c7-e6f3-4cbc-8498-b9e5f023ad2d\" (UID: \"a96961c7-e6f3-4cbc-8498-b9e5f023ad2d\") " Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.214221 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5d00fb2-e93c-4b84-b307-f322137b1be4-operator-scripts\") pod \"c5d00fb2-e93c-4b84-b307-f322137b1be4\" (UID: \"c5d00fb2-e93c-4b84-b307-f322137b1be4\") " Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.214284 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/870350e2-3c24-4788-afb1-8d5a4d77172e-operator-scripts\") pod \"870350e2-3c24-4788-afb1-8d5a4d77172e\" (UID: \"870350e2-3c24-4788-afb1-8d5a4d77172e\") " Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.214368 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x8bq\" (UniqueName: \"kubernetes.io/projected/4445f438-ce8c-4014-ad6f-b892beed381a-kube-api-access-6x8bq\") pod \"4445f438-ce8c-4014-ad6f-b892beed381a\" (UID: \"4445f438-ce8c-4014-ad6f-b892beed381a\") " Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.214409 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzxxl\" (UniqueName: \"kubernetes.io/projected/c5d00fb2-e93c-4b84-b307-f322137b1be4-kube-api-access-xzxxl\") pod \"c5d00fb2-e93c-4b84-b307-f322137b1be4\" (UID: \"c5d00fb2-e93c-4b84-b307-f322137b1be4\") " Feb 16 
13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.222849 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5d00fb2-e93c-4b84-b307-f322137b1be4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5d00fb2-e93c-4b84-b307-f322137b1be4" (UID: "c5d00fb2-e93c-4b84-b307-f322137b1be4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.236944 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a96961c7-e6f3-4cbc-8498-b9e5f023ad2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a96961c7-e6f3-4cbc-8498-b9e5f023ad2d" (UID: "a96961c7-e6f3-4cbc-8498-b9e5f023ad2d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.242442 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4445f438-ce8c-4014-ad6f-b892beed381a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4445f438-ce8c-4014-ad6f-b892beed381a" (UID: "4445f438-ce8c-4014-ad6f-b892beed381a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.245791 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/870350e2-3c24-4788-afb1-8d5a4d77172e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "870350e2-3c24-4788-afb1-8d5a4d77172e" (UID: "870350e2-3c24-4788-afb1-8d5a4d77172e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.413570 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fde98520-6555-417b-851c-14dccde518ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fde98520-6555-417b-851c-14dccde518ad" (UID: "fde98520-6555-417b-851c-14dccde518ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.418690 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde98520-6555-417b-851c-14dccde518ad-kube-api-access-kpfw5" (OuterVolumeSpecName: "kube-api-access-kpfw5") pod "fde98520-6555-417b-851c-14dccde518ad" (UID: "fde98520-6555-417b-851c-14dccde518ad"). InnerVolumeSpecName "kube-api-access-kpfw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.569146 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a96961c7-e6f3-4cbc-8498-b9e5f023ad2d-kube-api-access-8spcw" (OuterVolumeSpecName: "kube-api-access-8spcw") pod "a96961c7-e6f3-4cbc-8498-b9e5f023ad2d" (UID: "a96961c7-e6f3-4cbc-8498-b9e5f023ad2d"). InnerVolumeSpecName "kube-api-access-8spcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.569404 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d00fb2-e93c-4b84-b307-f322137b1be4-kube-api-access-xzxxl" (OuterVolumeSpecName: "kube-api-access-xzxxl") pod "c5d00fb2-e93c-4b84-b307-f322137b1be4" (UID: "c5d00fb2-e93c-4b84-b307-f322137b1be4"). InnerVolumeSpecName "kube-api-access-xzxxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.572600 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870350e2-3c24-4788-afb1-8d5a4d77172e-kube-api-access-spg87" (OuterVolumeSpecName: "kube-api-access-spg87") pod "870350e2-3c24-4788-afb1-8d5a4d77172e" (UID: "870350e2-3c24-4788-afb1-8d5a4d77172e"). InnerVolumeSpecName "kube-api-access-spg87". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.590108 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4445f438-ce8c-4014-ad6f-b892beed381a-kube-api-access-6x8bq" (OuterVolumeSpecName: "kube-api-access-6x8bq") pod "4445f438-ce8c-4014-ad6f-b892beed381a" (UID: "4445f438-ce8c-4014-ad6f-b892beed381a"). InnerVolumeSpecName "kube-api-access-6x8bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.610308 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a96961c7-e6f3-4cbc-8498-b9e5f023ad2d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.610368 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5d00fb2-e93c-4b84-b307-f322137b1be4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.610382 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/870350e2-3c24-4788-afb1-8d5a4d77172e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.610394 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x8bq\" (UniqueName: 
\"kubernetes.io/projected/4445f438-ce8c-4014-ad6f-b892beed381a-kube-api-access-6x8bq\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.610710 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzxxl\" (UniqueName: \"kubernetes.io/projected/c5d00fb2-e93c-4b84-b307-f322137b1be4-kube-api-access-xzxxl\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.610726 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4445f438-ce8c-4014-ad6f-b892beed381a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.610738 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpfw5\" (UniqueName: \"kubernetes.io/projected/fde98520-6555-417b-851c-14dccde518ad-kube-api-access-kpfw5\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.610748 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8spcw\" (UniqueName: \"kubernetes.io/projected/a96961c7-e6f3-4cbc-8498-b9e5f023ad2d-kube-api-access-8spcw\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.610757 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spg87\" (UniqueName: \"kubernetes.io/projected/870350e2-3c24-4788-afb1-8d5a4d77172e-kube-api-access-spg87\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.610771 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fde98520-6555-417b-851c-14dccde518ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.630579 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-08d4-account-create-update-846z4" 
event={"ID":"862482a0-fe2f-481c-a819-4539a198dc9d","Type":"ContainerDied","Data":"663dd5ab12b5bbb3244bea9ffe70b8a4e3f4f84e77dde58f78a03f5cb1262440"} Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.630629 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="663dd5ab12b5bbb3244bea9ffe70b8a4e3f4f84e77dde58f78a03f5cb1262440" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.630741 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-08d4-account-create-update-846z4" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.658725 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-466e-account-create-update-22bqw" event={"ID":"fde98520-6555-417b-851c-14dccde518ad","Type":"ContainerDied","Data":"d56350d7739df68b31c7d1c32ed338d6c2542bc41a50358b0aec09dd85ce5efb"} Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.658794 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d56350d7739df68b31c7d1c32ed338d6c2542bc41a50358b0aec09dd85ce5efb" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.658769 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-466e-account-create-update-22bqw" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.695297 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x869h" event={"ID":"4445f438-ce8c-4014-ad6f-b892beed381a","Type":"ContainerDied","Data":"5ebdcf119a596782784fdd6aafae7099bc5a8b679277c2aef7de2bf25c305739"} Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.695356 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ebdcf119a596782784fdd6aafae7099bc5a8b679277c2aef7de2bf25c305739" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.695486 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-x869h" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.710711 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cbl5p" event={"ID":"870350e2-3c24-4788-afb1-8d5a4d77172e","Type":"ContainerDied","Data":"db4d8f17c750499cf2ac0e50c22830ea187171602bee578e0923205f718fd6f7"} Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.710796 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db4d8f17c750499cf2ac0e50c22830ea187171602bee578e0923205f718fd6f7" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.710863 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cbl5p" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.712038 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9twwj\" (UniqueName: \"kubernetes.io/projected/862482a0-fe2f-481c-a819-4539a198dc9d-kube-api-access-9twwj\") pod \"862482a0-fe2f-481c-a819-4539a198dc9d\" (UID: \"862482a0-fe2f-481c-a819-4539a198dc9d\") " Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.712330 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/862482a0-fe2f-481c-a819-4539a198dc9d-operator-scripts\") pod \"862482a0-fe2f-481c-a819-4539a198dc9d\" (UID: \"862482a0-fe2f-481c-a819-4539a198dc9d\") " Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.722308 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mjkrp" event={"ID":"c5d00fb2-e93c-4b84-b307-f322137b1be4","Type":"ContainerDied","Data":"006baff2fc15ed7f13d80d429c96443d95438fe376582bc395b5b72178e89b8e"} Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.722374 4812 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="006baff2fc15ed7f13d80d429c96443d95438fe376582bc395b5b72178e89b8e" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.722492 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mjkrp" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.722733 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/862482a0-fe2f-481c-a819-4539a198dc9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "862482a0-fe2f-481c-a819-4539a198dc9d" (UID: "862482a0-fe2f-481c-a819-4539a198dc9d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.730382 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862482a0-fe2f-481c-a819-4539a198dc9d-kube-api-access-9twwj" (OuterVolumeSpecName: "kube-api-access-9twwj") pod "862482a0-fe2f-481c-a819-4539a198dc9d" (UID: "862482a0-fe2f-481c-a819-4539a198dc9d"). InnerVolumeSpecName "kube-api-access-9twwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.756594 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-p8djg" event={"ID":"a96961c7-e6f3-4cbc-8498-b9e5f023ad2d","Type":"ContainerDied","Data":"d3ebd62b60663203552b127637ee1753ddb2416b78622eddce10a10ae145f636"} Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.756644 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3ebd62b60663203552b127637ee1753ddb2416b78622eddce10a10ae145f636" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.756732 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-p8djg" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.814528 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9twwj\" (UniqueName: \"kubernetes.io/projected/862482a0-fe2f-481c-a819-4539a198dc9d-kube-api-access-9twwj\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:46 crc kubenswrapper[4812]: I0216 13:53:46.814560 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/862482a0-fe2f-481c-a819-4539a198dc9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:47 crc kubenswrapper[4812]: I0216 13:53:47.774934 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7f34d582-3b55-4d2a-91b3-c64acd57981f","Type":"ContainerStarted","Data":"0e5bfedb51229d23082c09976f4fce08f90019c3b4004edf92d289f74fe6a412"} Feb 16 13:53:47 crc kubenswrapper[4812]: I0216 13:53:47.775280 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7f34d582-3b55-4d2a-91b3-c64acd57981f","Type":"ContainerStarted","Data":"80e8eb9c89ad0b7257e63cb01d0496ebd73b9700daef182806a3e00f3687e499"} Feb 16 13:53:47 crc kubenswrapper[4812]: I0216 13:53:47.775291 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7f34d582-3b55-4d2a-91b3-c64acd57981f","Type":"ContainerStarted","Data":"b49a1cf2bad40583504996e18ddce2fb5117c8f9b4743a0adbaab2cac966fdb1"} Feb 16 13:53:47 crc kubenswrapper[4812]: I0216 13:53:47.777348 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-c7brg" event={"ID":"919aaed2-0230-4b07-aea8-fb57e6917cff","Type":"ContainerStarted","Data":"b24a084ba30bf4d0cccce5cd9061fe696362aadb7b4055cd188b6d410529a579"} Feb 16 13:53:47 crc kubenswrapper[4812]: I0216 13:53:47.805512 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-db-sync-c7brg" podStartSLOduration=6.067348846 podStartE2EDuration="13.805492299s" podCreationTimestamp="2026-02-16 13:53:34 +0000 UTC" firstStartedPulling="2026-02-16 13:53:38.324143487 +0000 UTC m=+1307.388474188" lastFinishedPulling="2026-02-16 13:53:46.06228694 +0000 UTC m=+1315.126617641" observedRunningTime="2026-02-16 13:53:47.803834501 +0000 UTC m=+1316.868165202" watchObservedRunningTime="2026-02-16 13:53:47.805492299 +0000 UTC m=+1316.869823000" Feb 16 13:53:48 crc kubenswrapper[4812]: I0216 13:53:48.802367 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7f34d582-3b55-4d2a-91b3-c64acd57981f","Type":"ContainerStarted","Data":"d264d22746f7ed2696baf6dae4bc47891c58996f3977c07792dd35f3fd5d6c81"} Feb 16 13:53:48 crc kubenswrapper[4812]: I0216 13:53:48.802953 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7f34d582-3b55-4d2a-91b3-c64acd57981f","Type":"ContainerStarted","Data":"7cfd788fd75968ec1c2034d58804a0bc08a2f7e905ca43b01308b7eb6dcd8dbf"} Feb 16 13:53:48 crc kubenswrapper[4812]: I0216 13:53:48.802967 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7f34d582-3b55-4d2a-91b3-c64acd57981f","Type":"ContainerStarted","Data":"0311d764f54db2f87082b1d5a7325cfeb8bc34550a612f34b67e4eca7b34b31b"} Feb 16 13:53:49 crc kubenswrapper[4812]: I0216 13:53:49.952172 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7f34d582-3b55-4d2a-91b3-c64acd57981f","Type":"ContainerStarted","Data":"03605837ecba2690eb6b38edf38b0c6a6c7de7f53050d408906338f4e90b8cf2"} Feb 16 13:53:50 crc kubenswrapper[4812]: I0216 13:53:50.014511 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=39.308624187 podStartE2EDuration="1m2.014484532s" podCreationTimestamp="2026-02-16 13:52:48 +0000 UTC" 
firstStartedPulling="2026-02-16 13:53:23.33492275 +0000 UTC m=+1292.399253451" lastFinishedPulling="2026-02-16 13:53:46.040783105 +0000 UTC m=+1315.105113796" observedRunningTime="2026-02-16 13:53:50.010813145 +0000 UTC m=+1319.075143846" watchObservedRunningTime="2026-02-16 13:53:50.014484532 +0000 UTC m=+1319.078815233" Feb 16 13:53:50 crc kubenswrapper[4812]: I0216 13:53:50.962642 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mwzf9" event={"ID":"03e0b815-7641-435c-9934-05f5c5307962","Type":"ContainerStarted","Data":"bde09c54d3755326e46294c9aa3086a0cacb04f9e964ac8d8b7cd14f37f0b309"} Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.127277 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-mwzf9" podStartSLOduration=3.4233160160000002 podStartE2EDuration="34.127242322s" podCreationTimestamp="2026-02-16 13:53:17 +0000 UTC" firstStartedPulling="2026-02-16 13:53:18.615577711 +0000 UTC m=+1287.679908412" lastFinishedPulling="2026-02-16 13:53:49.319504017 +0000 UTC m=+1318.383834718" observedRunningTime="2026-02-16 13:53:51.111902456 +0000 UTC m=+1320.176233157" watchObservedRunningTime="2026-02-16 13:53:51.127242322 +0000 UTC m=+1320.191573023" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.390513 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-qql9l"] Feb 16 13:53:51 crc kubenswrapper[4812]: E0216 13:53:51.391177 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4445f438-ce8c-4014-ad6f-b892beed381a" containerName="mariadb-database-create" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.391206 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="4445f438-ce8c-4014-ad6f-b892beed381a" containerName="mariadb-database-create" Feb 16 13:53:51 crc kubenswrapper[4812]: E0216 13:53:51.391225 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="49d92a9c-6e64-409e-a324-0061b9b451d0" containerName="mariadb-account-create-update" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.391237 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d92a9c-6e64-409e-a324-0061b9b451d0" containerName="mariadb-account-create-update" Feb 16 13:53:51 crc kubenswrapper[4812]: E0216 13:53:51.391266 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870350e2-3c24-4788-afb1-8d5a4d77172e" containerName="mariadb-database-create" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.391275 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="870350e2-3c24-4788-afb1-8d5a4d77172e" containerName="mariadb-database-create" Feb 16 13:53:51 crc kubenswrapper[4812]: E0216 13:53:51.391293 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aee3570-b33b-4898-ad56-62202a1dd25b" containerName="mariadb-account-create-update" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.391304 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aee3570-b33b-4898-ad56-62202a1dd25b" containerName="mariadb-account-create-update" Feb 16 13:53:51 crc kubenswrapper[4812]: E0216 13:53:51.391325 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862482a0-fe2f-481c-a819-4539a198dc9d" containerName="mariadb-account-create-update" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.391339 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="862482a0-fe2f-481c-a819-4539a198dc9d" containerName="mariadb-account-create-update" Feb 16 13:53:51 crc kubenswrapper[4812]: E0216 13:53:51.391366 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a96961c7-e6f3-4cbc-8498-b9e5f023ad2d" containerName="mariadb-database-create" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.391378 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a96961c7-e6f3-4cbc-8498-b9e5f023ad2d" containerName="mariadb-database-create" Feb 16 13:53:51 crc 
kubenswrapper[4812]: E0216 13:53:51.391392 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d00fb2-e93c-4b84-b307-f322137b1be4" containerName="mariadb-database-create" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.391401 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d00fb2-e93c-4b84-b307-f322137b1be4" containerName="mariadb-database-create" Feb 16 13:53:51 crc kubenswrapper[4812]: E0216 13:53:51.391418 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde98520-6555-417b-851c-14dccde518ad" containerName="mariadb-account-create-update" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.391430 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde98520-6555-417b-851c-14dccde518ad" containerName="mariadb-account-create-update" Feb 16 13:53:51 crc kubenswrapper[4812]: E0216 13:53:51.391578 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57dec6e7-2b03-417e-9aca-c535aabeba2a" containerName="ovn-config" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.391591 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="57dec6e7-2b03-417e-9aca-c535aabeba2a" containerName="ovn-config" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.391875 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="870350e2-3c24-4788-afb1-8d5a4d77172e" containerName="mariadb-database-create" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.391923 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d00fb2-e93c-4b84-b307-f322137b1be4" containerName="mariadb-database-create" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.391935 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a96961c7-e6f3-4cbc-8498-b9e5f023ad2d" containerName="mariadb-database-create" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.391952 4812 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="49d92a9c-6e64-409e-a324-0061b9b451d0" containerName="mariadb-account-create-update" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.391963 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="57dec6e7-2b03-417e-9aca-c535aabeba2a" containerName="ovn-config" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.391984 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde98520-6555-417b-851c-14dccde518ad" containerName="mariadb-account-create-update" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.391999 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="4445f438-ce8c-4014-ad6f-b892beed381a" containerName="mariadb-database-create" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.392011 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="862482a0-fe2f-481c-a819-4539a198dc9d" containerName="mariadb-account-create-update" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.392024 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aee3570-b33b-4898-ad56-62202a1dd25b" containerName="mariadb-account-create-update" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.393603 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.415620 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-qql9l"] Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.428187 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.517468 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.517675 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.517746 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.517800 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-config\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " 
pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.517902 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-dns-svc\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.517956 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qskxm\" (UniqueName: \"kubernetes.io/projected/ceb8fdfb-dd06-417c-91db-9b6843d52984-kube-api-access-qskxm\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.620612 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.621093 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-config\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.621181 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-dns-svc\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " 
pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.621226 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qskxm\" (UniqueName: \"kubernetes.io/projected/ceb8fdfb-dd06-417c-91db-9b6843d52984-kube-api-access-qskxm\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.621339 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.621428 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.621860 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.622341 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " pod="openstack/dnsmasq-dns-764c5664d7-qql9l" 
Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.622688 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-dns-svc\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.623370 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.623411 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-config\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.656894 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qskxm\" (UniqueName: \"kubernetes.io/projected/ceb8fdfb-dd06-417c-91db-9b6843d52984-kube-api-access-qskxm\") pod \"dnsmasq-dns-764c5664d7-qql9l\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:51 crc kubenswrapper[4812]: I0216 13:53:51.760714 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:53:53 crc kubenswrapper[4812]: I0216 13:53:53.022611 4812 generic.go:334] "Generic (PLEG): container finished" podID="919aaed2-0230-4b07-aea8-fb57e6917cff" containerID="b24a084ba30bf4d0cccce5cd9061fe696362aadb7b4055cd188b6d410529a579" exitCode=0 Feb 16 13:53:53 crc kubenswrapper[4812]: I0216 13:53:53.022962 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-c7brg" event={"ID":"919aaed2-0230-4b07-aea8-fb57e6917cff","Type":"ContainerDied","Data":"b24a084ba30bf4d0cccce5cd9061fe696362aadb7b4055cd188b6d410529a579"} Feb 16 13:53:57 crc kubenswrapper[4812]: I0216 13:53:57.159132 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-c7brg" event={"ID":"919aaed2-0230-4b07-aea8-fb57e6917cff","Type":"ContainerDied","Data":"500986678d16a03a2901441aff2e36d9b11c4927d07d57bb2f3ce9a3dbfba439"} Feb 16 13:53:57 crc kubenswrapper[4812]: I0216 13:53:57.160382 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="500986678d16a03a2901441aff2e36d9b11c4927d07d57bb2f3ce9a3dbfba439" Feb 16 13:53:57 crc kubenswrapper[4812]: I0216 13:53:57.265134 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-c7brg" Feb 16 13:53:57 crc kubenswrapper[4812]: I0216 13:53:57.425195 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919aaed2-0230-4b07-aea8-fb57e6917cff-config-data\") pod \"919aaed2-0230-4b07-aea8-fb57e6917cff\" (UID: \"919aaed2-0230-4b07-aea8-fb57e6917cff\") " Feb 16 13:53:57 crc kubenswrapper[4812]: I0216 13:53:57.425293 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919aaed2-0230-4b07-aea8-fb57e6917cff-combined-ca-bundle\") pod \"919aaed2-0230-4b07-aea8-fb57e6917cff\" (UID: \"919aaed2-0230-4b07-aea8-fb57e6917cff\") " Feb 16 13:53:57 crc kubenswrapper[4812]: I0216 13:53:57.425507 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7s5k\" (UniqueName: \"kubernetes.io/projected/919aaed2-0230-4b07-aea8-fb57e6917cff-kube-api-access-k7s5k\") pod \"919aaed2-0230-4b07-aea8-fb57e6917cff\" (UID: \"919aaed2-0230-4b07-aea8-fb57e6917cff\") " Feb 16 13:53:57 crc kubenswrapper[4812]: I0216 13:53:57.538117 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/919aaed2-0230-4b07-aea8-fb57e6917cff-kube-api-access-k7s5k" (OuterVolumeSpecName: "kube-api-access-k7s5k") pod "919aaed2-0230-4b07-aea8-fb57e6917cff" (UID: "919aaed2-0230-4b07-aea8-fb57e6917cff"). InnerVolumeSpecName "kube-api-access-k7s5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:53:57 crc kubenswrapper[4812]: I0216 13:53:57.558947 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919aaed2-0230-4b07-aea8-fb57e6917cff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "919aaed2-0230-4b07-aea8-fb57e6917cff" (UID: "919aaed2-0230-4b07-aea8-fb57e6917cff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:53:57 crc kubenswrapper[4812]: I0216 13:53:57.637598 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7s5k\" (UniqueName: \"kubernetes.io/projected/919aaed2-0230-4b07-aea8-fb57e6917cff-kube-api-access-k7s5k\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:57 crc kubenswrapper[4812]: I0216 13:53:57.637969 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919aaed2-0230-4b07-aea8-fb57e6917cff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:57 crc kubenswrapper[4812]: I0216 13:53:57.654192 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919aaed2-0230-4b07-aea8-fb57e6917cff-config-data" (OuterVolumeSpecName: "config-data") pod "919aaed2-0230-4b07-aea8-fb57e6917cff" (UID: "919aaed2-0230-4b07-aea8-fb57e6917cff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:53:57 crc kubenswrapper[4812]: I0216 13:53:57.740626 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919aaed2-0230-4b07-aea8-fb57e6917cff-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:53:57 crc kubenswrapper[4812]: I0216 13:53:57.897908 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-qql9l"] Feb 16 13:53:58 crc kubenswrapper[4812]: I0216 13:53:58.175923 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e02a9868-e12c-4a65-9ba5-4a5965131b5b","Type":"ContainerStarted","Data":"b3c8d79bb1d51b82d94578928b328f7fde590b268cc014d9eda7fcd30ce8654f"} Feb 16 13:53:58 crc kubenswrapper[4812]: I0216 13:53:58.177966 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-qql9l" 
event={"ID":"ceb8fdfb-dd06-417c-91db-9b6843d52984","Type":"ContainerStarted","Data":"20a770c95407522eabbd85d836f5df567ff72b9a797b7fee694974ee33b51634"} Feb 16 13:53:58 crc kubenswrapper[4812]: I0216 13:53:58.177986 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-c7brg" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.050182 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-khzht"] Feb 16 13:53:59 crc kubenswrapper[4812]: E0216 13:53:59.058205 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919aaed2-0230-4b07-aea8-fb57e6917cff" containerName="keystone-db-sync" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.058245 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="919aaed2-0230-4b07-aea8-fb57e6917cff" containerName="keystone-db-sync" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.058554 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="919aaed2-0230-4b07-aea8-fb57e6917cff" containerName="keystone-db-sync" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.059765 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.068573 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.068580 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.068809 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.069163 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vtcqc" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.074860 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.091238 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-credential-keys\") pod \"keystone-bootstrap-khzht\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.091381 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-fernet-keys\") pod \"keystone-bootstrap-khzht\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.091572 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-combined-ca-bundle\") pod \"keystone-bootstrap-khzht\" (UID: 
\"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.091630 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjczf\" (UniqueName: \"kubernetes.io/projected/9e9a8b01-3875-4489-8598-377dfdac550f-kube-api-access-hjczf\") pod \"keystone-bootstrap-khzht\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.091662 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-config-data\") pod \"keystone-bootstrap-khzht\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.091683 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-scripts\") pod \"keystone-bootstrap-khzht\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.099318 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-khzht"] Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.194479 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-fernet-keys\") pod \"keystone-bootstrap-khzht\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.194630 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-combined-ca-bundle\") pod \"keystone-bootstrap-khzht\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.194680 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjczf\" (UniqueName: \"kubernetes.io/projected/9e9a8b01-3875-4489-8598-377dfdac550f-kube-api-access-hjczf\") pod \"keystone-bootstrap-khzht\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.194711 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-config-data\") pod \"keystone-bootstrap-khzht\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.194743 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-scripts\") pod \"keystone-bootstrap-khzht\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.194827 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-credential-keys\") pod \"keystone-bootstrap-khzht\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.208084 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-scripts\") pod \"keystone-bootstrap-khzht\" (UID: 
\"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.215099 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-config-data\") pod \"keystone-bootstrap-khzht\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.221304 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-combined-ca-bundle\") pod \"keystone-bootstrap-khzht\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.221841 4812 generic.go:334] "Generic (PLEG): container finished" podID="ceb8fdfb-dd06-417c-91db-9b6843d52984" containerID="5d2ac037a91f96dfa4056d7f1ce34eae113ecfef5454086c4efc2ed0d015624c" exitCode=0 Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.221935 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-qql9l" event={"ID":"ceb8fdfb-dd06-417c-91db-9b6843d52984","Type":"ContainerDied","Data":"5d2ac037a91f96dfa4056d7f1ce34eae113ecfef5454086c4efc2ed0d015624c"} Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.229612 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-fernet-keys\") pod \"keystone-bootstrap-khzht\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.242475 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-credential-keys\") pod \"keystone-bootstrap-khzht\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.302523 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-qql9l"] Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.312620 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjczf\" (UniqueName: \"kubernetes.io/projected/9e9a8b01-3875-4489-8598-377dfdac550f-kube-api-access-hjczf\") pod \"keystone-bootstrap-khzht\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.394909 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-khzht" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.819255 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-vmrr6"] Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.820861 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.924607 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skhxd\" (UniqueName: \"kubernetes.io/projected/c5f831e6-8de4-498e-9337-0e9c274c2af6-kube-api-access-skhxd\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.924807 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.924845 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.925094 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-dns-svc\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.925141 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-config\") pod 
\"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.925359 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:53:59 crc kubenswrapper[4812]: I0216 13:53:59.951983 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-vmrr6"] Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.026673 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-dns-svc\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.026737 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-config\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.026817 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.026850 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-skhxd\" (UniqueName: \"kubernetes.io/projected/c5f831e6-8de4-498e-9337-0e9c274c2af6-kube-api-access-skhxd\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.026933 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.026963 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.028201 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.029583 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.053353 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-dns-svc\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.058572 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-config\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.063353 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.101382 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skhxd\" (UniqueName: \"kubernetes.io/projected/c5f831e6-8de4-498e-9337-0e9c274c2af6-kube-api-access-skhxd\") pod \"dnsmasq-dns-5959f8865f-vmrr6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.113545 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.622789 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6q6x6"] Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.626561 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6q6x6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.644902 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.644954 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.655616 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qj2kj"] Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.659205 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.659233 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-w28r7" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.662461 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fxw8m" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.666143 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.666636 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.681049 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-d76qk"] Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.715602 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.731120 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-pp6lr" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.731395 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.769275 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-db-sync-config-data\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.769378 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a35f33f0-33ff-4938-b15a-455a830ac631-config\") pod \"neutron-db-sync-6q6x6\" (UID: \"a35f33f0-33ff-4938-b15a-455a830ac631\") " pod="openstack/neutron-db-sync-6q6x6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.769438 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-combined-ca-bundle\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.769482 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35f33f0-33ff-4938-b15a-455a830ac631-combined-ca-bundle\") pod \"neutron-db-sync-6q6x6\" (UID: \"a35f33f0-33ff-4938-b15a-455a830ac631\") " pod="openstack/neutron-db-sync-6q6x6" Feb 16 13:54:00 crc 
kubenswrapper[4812]: I0216 13:54:00.769514 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtf4b\" (UniqueName: \"kubernetes.io/projected/a35f33f0-33ff-4938-b15a-455a830ac631-kube-api-access-rtf4b\") pod \"neutron-db-sync-6q6x6\" (UID: \"a35f33f0-33ff-4938-b15a-455a830ac631\") " pod="openstack/neutron-db-sync-6q6x6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.769532 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-scripts\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.769561 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-config-data\") pod \"placement-db-sync-d76qk\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.769599 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3e61e08-7ed1-43ed-a137-910b10e85e36-logs\") pod \"placement-db-sync-d76qk\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.769636 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-scripts\") pod \"placement-db-sync-d76qk\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.769714 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qxxk\" (UniqueName: \"kubernetes.io/projected/d9d0140e-e353-40a3-8970-5007408f4cb8-kube-api-access-6qxxk\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.769789 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9d0140e-e353-40a3-8970-5007408f4cb8-etc-machine-id\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.769813 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-combined-ca-bundle\") pod \"placement-db-sync-d76qk\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.769840 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-config-data\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.769868 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42pvs\" (UniqueName: \"kubernetes.io/projected/b3e61e08-7ed1-43ed-a137-910b10e85e36-kube-api-access-42pvs\") pod \"placement-db-sync-d76qk\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.774607 4812 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.785635 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-p4bgr"] Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.798848 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-p4bgr" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.810222 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qhrnq" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.810562 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.826214 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-krnzs"] Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.837810 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.842183 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.853808 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.854121 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.864007 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-97z2b" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.881714 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6q6x6"] Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883271 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-config-data\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883319 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42pvs\" (UniqueName: \"kubernetes.io/projected/b3e61e08-7ed1-43ed-a137-910b10e85e36-kube-api-access-42pvs\") pod \"placement-db-sync-d76qk\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883374 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-db-sync-config-data\") pod \"cinder-db-sync-qj2kj\" (UID: 
\"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883405 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd76f722-eb61-4676-9456-9a9bb443ef16-combined-ca-bundle\") pod \"barbican-db-sync-p4bgr\" (UID: \"dd76f722-eb61-4676-9456-9a9bb443ef16\") " pod="openstack/barbican-db-sync-p4bgr" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883483 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a35f33f0-33ff-4938-b15a-455a830ac631-config\") pod \"neutron-db-sync-6q6x6\" (UID: \"a35f33f0-33ff-4938-b15a-455a830ac631\") " pod="openstack/neutron-db-sync-6q6x6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883531 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-combined-ca-bundle\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883557 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35f33f0-33ff-4938-b15a-455a830ac631-combined-ca-bundle\") pod \"neutron-db-sync-6q6x6\" (UID: \"a35f33f0-33ff-4938-b15a-455a830ac631\") " pod="openstack/neutron-db-sync-6q6x6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883590 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtf4b\" (UniqueName: \"kubernetes.io/projected/a35f33f0-33ff-4938-b15a-455a830ac631-kube-api-access-rtf4b\") pod \"neutron-db-sync-6q6x6\" (UID: \"a35f33f0-33ff-4938-b15a-455a830ac631\") " pod="openstack/neutron-db-sync-6q6x6" Feb 16 
13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883611 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-scripts\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883640 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-config-data\") pod \"placement-db-sync-d76qk\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883670 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3e61e08-7ed1-43ed-a137-910b10e85e36-logs\") pod \"placement-db-sync-d76qk\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883703 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-scripts\") pod \"placement-db-sync-d76qk\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883769 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz7gm\" (UniqueName: \"kubernetes.io/projected/dd76f722-eb61-4676-9456-9a9bb443ef16-kube-api-access-qz7gm\") pod \"barbican-db-sync-p4bgr\" (UID: \"dd76f722-eb61-4676-9456-9a9bb443ef16\") " pod="openstack/barbican-db-sync-p4bgr" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883800 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6qxxk\" (UniqueName: \"kubernetes.io/projected/d9d0140e-e353-40a3-8970-5007408f4cb8-kube-api-access-6qxxk\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883843 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9d0140e-e353-40a3-8970-5007408f4cb8-etc-machine-id\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883867 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-combined-ca-bundle\") pod \"placement-db-sync-d76qk\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.883891 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dd76f722-eb61-4676-9456-9a9bb443ef16-db-sync-config-data\") pod \"barbican-db-sync-p4bgr\" (UID: \"dd76f722-eb61-4676-9456-9a9bb443ef16\") " pod="openstack/barbican-db-sync-p4bgr" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.897115 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9d0140e-e353-40a3-8970-5007408f4cb8-etc-machine-id\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.899219 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a35f33f0-33ff-4938-b15a-455a830ac631-combined-ca-bundle\") pod \"neutron-db-sync-6q6x6\" (UID: \"a35f33f0-33ff-4938-b15a-455a830ac631\") " pod="openstack/neutron-db-sync-6q6x6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.902054 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-scripts\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.902112 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qj2kj"] Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.902676 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3e61e08-7ed1-43ed-a137-910b10e85e36-logs\") pod \"placement-db-sync-d76qk\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.904233 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-config-data\") pod \"placement-db-sync-d76qk\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.904791 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-scripts\") pod \"placement-db-sync-d76qk\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.910105 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/a35f33f0-33ff-4938-b15a-455a830ac631-config\") pod \"neutron-db-sync-6q6x6\" (UID: \"a35f33f0-33ff-4938-b15a-455a830ac631\") " pod="openstack/neutron-db-sync-6q6x6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.927338 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-db-sync-config-data\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.930406 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-d76qk"] Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.931262 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtf4b\" (UniqueName: \"kubernetes.io/projected/a35f33f0-33ff-4938-b15a-455a830ac631-kube-api-access-rtf4b\") pod \"neutron-db-sync-6q6x6\" (UID: \"a35f33f0-33ff-4938-b15a-455a830ac631\") " pod="openstack/neutron-db-sync-6q6x6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.938571 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-krnzs"] Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.948199 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42pvs\" (UniqueName: \"kubernetes.io/projected/b3e61e08-7ed1-43ed-a137-910b10e85e36-kube-api-access-42pvs\") pod \"placement-db-sync-d76qk\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.949363 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-config-data\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " 
pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.952353 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qxxk\" (UniqueName: \"kubernetes.io/projected/d9d0140e-e353-40a3-8970-5007408f4cb8-kube-api-access-6qxxk\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.955525 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-combined-ca-bundle\") pod \"placement-db-sync-d76qk\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.955631 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-p4bgr"] Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.958657 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-combined-ca-bundle\") pod \"cinder-db-sync-qj2kj\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.971940 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-vmrr6"] Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.986086 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dd76f722-eb61-4676-9456-9a9bb443ef16-db-sync-config-data\") pod \"barbican-db-sync-p4bgr\" (UID: \"dd76f722-eb61-4676-9456-9a9bb443ef16\") " pod="openstack/barbican-db-sync-p4bgr" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.986251 4812 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd76f722-eb61-4676-9456-9a9bb443ef16-combined-ca-bundle\") pod \"barbican-db-sync-p4bgr\" (UID: \"dd76f722-eb61-4676-9456-9a9bb443ef16\") " pod="openstack/barbican-db-sync-p4bgr" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.986344 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7d4eae6-781f-4675-a6c3-ee0f1589c735-config-data\") pod \"cloudkitty-db-sync-krnzs\" (UID: \"a7d4eae6-781f-4675-a6c3-ee0f1589c735\") " pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.986390 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq54s\" (UniqueName: \"kubernetes.io/projected/a7d4eae6-781f-4675-a6c3-ee0f1589c735-kube-api-access-mq54s\") pod \"cloudkitty-db-sync-krnzs\" (UID: \"a7d4eae6-781f-4675-a6c3-ee0f1589c735\") " pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.986426 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a7d4eae6-781f-4675-a6c3-ee0f1589c735-certs\") pod \"cloudkitty-db-sync-krnzs\" (UID: \"a7d4eae6-781f-4675-a6c3-ee0f1589c735\") " pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.986635 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d4eae6-781f-4675-a6c3-ee0f1589c735-combined-ca-bundle\") pod \"cloudkitty-db-sync-krnzs\" (UID: \"a7d4eae6-781f-4675-a6c3-ee0f1589c735\") " pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.986835 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qz7gm\" (UniqueName: \"kubernetes.io/projected/dd76f722-eb61-4676-9456-9a9bb443ef16-kube-api-access-qz7gm\") pod \"barbican-db-sync-p4bgr\" (UID: \"dd76f722-eb61-4676-9456-9a9bb443ef16\") " pod="openstack/barbican-db-sync-p4bgr" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.986876 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7d4eae6-781f-4675-a6c3-ee0f1589c735-scripts\") pod \"cloudkitty-db-sync-krnzs\" (UID: \"a7d4eae6-781f-4675-a6c3-ee0f1589c735\") " pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.991165 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6q6x6" Feb 16 13:54:00 crc kubenswrapper[4812]: I0216 13:54:00.992120 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.008273 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.010252 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dd76f722-eb61-4676-9456-9a9bb443ef16-db-sync-config-data\") pod \"barbican-db-sync-p4bgr\" (UID: \"dd76f722-eb61-4676-9456-9a9bb443ef16\") " pod="openstack/barbican-db-sync-p4bgr" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.012138 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.015854 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.019939 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz7gm\" (UniqueName: \"kubernetes.io/projected/dd76f722-eb61-4676-9456-9a9bb443ef16-kube-api-access-qz7gm\") pod \"barbican-db-sync-p4bgr\" (UID: \"dd76f722-eb61-4676-9456-9a9bb443ef16\") " pod="openstack/barbican-db-sync-p4bgr" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.023026 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-l5kzb"] Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.025278 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.032437 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.049581 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-l5kzb"] Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.090638 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.090897 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-scripts\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.091249 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7d4eae6-781f-4675-a6c3-ee0f1589c735-scripts\") pod \"cloudkitty-db-sync-krnzs\" (UID: \"a7d4eae6-781f-4675-a6c3-ee0f1589c735\") " pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.091337 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.091406 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.091558 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c359d03-e59e-4b85-8599-826a340acc8f-log-httpd\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.091687 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.091756 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7d4eae6-781f-4675-a6c3-ee0f1589c735-config-data\") pod \"cloudkitty-db-sync-krnzs\" (UID: \"a7d4eae6-781f-4675-a6c3-ee0f1589c735\") " pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.091829 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.091933 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mq54s\" (UniqueName: \"kubernetes.io/projected/a7d4eae6-781f-4675-a6c3-ee0f1589c735-kube-api-access-mq54s\") pod \"cloudkitty-db-sync-krnzs\" (UID: \"a7d4eae6-781f-4675-a6c3-ee0f1589c735\") " pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.092019 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a7d4eae6-781f-4675-a6c3-ee0f1589c735-certs\") pod \"cloudkitty-db-sync-krnzs\" (UID: \"a7d4eae6-781f-4675-a6c3-ee0f1589c735\") " pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.092122 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt295\" (UniqueName: \"kubernetes.io/projected/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-kube-api-access-rt295\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.092228 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.092294 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d4eae6-781f-4675-a6c3-ee0f1589c735-combined-ca-bundle\") pod \"cloudkitty-db-sync-krnzs\" (UID: \"a7d4eae6-781f-4675-a6c3-ee0f1589c735\") " pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.092408 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c359d03-e59e-4b85-8599-826a340acc8f-run-httpd\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.092511 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxsq7\" (UniqueName: \"kubernetes.io/projected/4c359d03-e59e-4b85-8599-826a340acc8f-kube-api-access-fxsq7\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.092615 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-config\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.092751 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-config-data\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.136952 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.147927 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-khzht"] Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.150233 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-d76qk" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.209126 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-config\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.209285 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-config-data\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.209366 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.209409 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-scripts\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.209524 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.209558 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.209663 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c359d03-e59e-4b85-8599-826a340acc8f-log-httpd\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.209808 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.209880 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.210012 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt295\" (UniqueName: \"kubernetes.io/projected/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-kube-api-access-rt295\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.210080 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.210192 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c359d03-e59e-4b85-8599-826a340acc8f-run-httpd\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.210253 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxsq7\" (UniqueName: \"kubernetes.io/projected/4c359d03-e59e-4b85-8599-826a340acc8f-kube-api-access-fxsq7\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.213867 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c359d03-e59e-4b85-8599-826a340acc8f-log-httpd\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.215035 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-config\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.216202 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 
crc kubenswrapper[4812]: I0216 13:54:01.217089 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.218038 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c359d03-e59e-4b85-8599-826a340acc8f-run-httpd\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.219808 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.220502 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.274289 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd76f722-eb61-4676-9456-9a9bb443ef16-combined-ca-bundle\") pod \"barbican-db-sync-p4bgr\" (UID: \"dd76f722-eb61-4676-9456-9a9bb443ef16\") " pod="openstack/barbican-db-sync-p4bgr" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.295015 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a7d4eae6-781f-4675-a6c3-ee0f1589c735-scripts\") pod \"cloudkitty-db-sync-krnzs\" (UID: \"a7d4eae6-781f-4675-a6c3-ee0f1589c735\") " pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.295914 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7d4eae6-781f-4675-a6c3-ee0f1589c735-config-data\") pod \"cloudkitty-db-sync-krnzs\" (UID: \"a7d4eae6-781f-4675-a6c3-ee0f1589c735\") " pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.296329 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d4eae6-781f-4675-a6c3-ee0f1589c735-combined-ca-bundle\") pod \"cloudkitty-db-sync-krnzs\" (UID: \"a7d4eae6-781f-4675-a6c3-ee0f1589c735\") " pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.296785 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a7d4eae6-781f-4675-a6c3-ee0f1589c735-certs\") pod \"cloudkitty-db-sync-krnzs\" (UID: \"a7d4eae6-781f-4675-a6c3-ee0f1589c735\") " pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.296896 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq54s\" (UniqueName: \"kubernetes.io/projected/a7d4eae6-781f-4675-a6c3-ee0f1589c735-kube-api-access-mq54s\") pod \"cloudkitty-db-sync-krnzs\" (UID: \"a7d4eae6-781f-4675-a6c3-ee0f1589c735\") " pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.300386 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-config-data\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " 
pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.302647 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-scripts\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.304710 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.305621 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.309719 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxsq7\" (UniqueName: \"kubernetes.io/projected/4c359d03-e59e-4b85-8599-826a340acc8f-kube-api-access-fxsq7\") pod \"ceilometer-0\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " pod="openstack/ceilometer-0" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.311904 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt295\" (UniqueName: \"kubernetes.io/projected/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-kube-api-access-rt295\") pod \"dnsmasq-dns-58dd9ff6bc-l5kzb\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.493176 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-p4bgr" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.497546 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-vmrr6"] Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.499474 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-krnzs" Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.713155 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-khzht" event={"ID":"9e9a8b01-3875-4489-8598-377dfdac550f","Type":"ContainerStarted","Data":"1fe7273e0bc9aa83d0d11572c23d927de0a24067c0bdbd385f7566fce7ec0dce"} Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.714720 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" event={"ID":"c5f831e6-8de4-498e-9337-0e9c274c2af6","Type":"ContainerStarted","Data":"ca67b0cb43e185c61925cef520a1494cc168d857812a1f08161751d86a328387"} Feb 16 13:54:01 crc kubenswrapper[4812]: I0216 13:54:01.786962 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6q6x6"] Feb 16 13:54:02 crc kubenswrapper[4812]: E0216 13:54:02.051714 4812 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 16 13:54:02 crc kubenswrapper[4812]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/ceb8fdfb-dd06-417c-91db-9b6843d52984/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 16 13:54:02 crc kubenswrapper[4812]: > podSandboxID="20a770c95407522eabbd85d836f5df567ff72b9a797b7fee694974ee33b51634" Feb 16 13:54:02 crc kubenswrapper[4812]: E0216 13:54:02.052042 4812 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 16 13:54:02 crc kubenswrapper[4812]: container 
&Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66chbbh56dh7fhfh68chf9hfdhbdh587h5b9h568h68fh77h5b5h559h577h687h574h5d5h584h8chd9hb4h66h566h545h699h564h568h66fhc9q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qskxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSo
cket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-764c5664d7-qql9l_openstack(ceb8fdfb-dd06-417c-91db-9b6843d52984): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/ceb8fdfb-dd06-417c-91db-9b6843d52984/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 16 13:54:02 crc kubenswrapper[4812]: > logger="UnhandledError" Feb 16 13:54:02 crc kubenswrapper[4812]: E0216 13:54:02.053729 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/ceb8fdfb-dd06-417c-91db-9b6843d52984/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-764c5664d7-qql9l" podUID="ceb8fdfb-dd06-417c-91db-9b6843d52984" Feb 16 13:54:02 crc kubenswrapper[4812]: 
E0216 13:54:02.636054 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podceb8fdfb_dd06_417c_91db_9b6843d52984.slice/crio-conmon-0b907a78f8f3de28e6398d1cc30ed0855304bbf9e3138a731511c714e186fe2b.scope\": RecentStats: unable to find data in memory cache]" Feb 16 13:54:02 crc kubenswrapper[4812]: I0216 13:54:02.668542 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:54:02 crc kubenswrapper[4812]: I0216 13:54:02.680825 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:02 crc kubenswrapper[4812]: I0216 13:54:02.758780 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qj2kj"] Feb 16 13:54:02 crc kubenswrapper[4812]: I0216 13:54:02.820230 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6q6x6" event={"ID":"a35f33f0-33ff-4938-b15a-455a830ac631","Type":"ContainerStarted","Data":"457fd8b93d42f1a4a6424d7c4c5f0f25552add40d76b19450a0f67074e6ff355"} Feb 16 13:54:02 crc kubenswrapper[4812]: I0216 13:54:02.844176 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e02a9868-e12c-4a65-9ba5-4a5965131b5b","Type":"ContainerStarted","Data":"c2421c6935d64b00dae4f5f2e2ad4de12d675fde6a818677e5b84fa6f212904b"} Feb 16 13:54:03 crc kubenswrapper[4812]: I0216 13:54:03.027852 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-d76qk"] Feb 16 13:54:03 crc kubenswrapper[4812]: I0216 13:54:03.334985 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-krnzs"] Feb 16 13:54:03 crc kubenswrapper[4812]: I0216 13:54:03.938722 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-krnzs" 
event={"ID":"a7d4eae6-781f-4675-a6c3-ee0f1589c735","Type":"ContainerStarted","Data":"452037f98888f0ae1bf2e223c6b5cca9059edbf0e2eed8f4c0982d6e0b1dbd11"} Feb 16 13:54:03 crc kubenswrapper[4812]: E0216 13:54:03.974917 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 13:54:03 crc kubenswrapper[4812]: E0216 13:54:03.975039 4812 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 13:54:03 crc kubenswrapper[4812]: E0216 13:54:03.975246 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq54s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-krnzs_openstack(a7d4eae6-781f-4675-a6c3-ee0f1589c735): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 13:54:03 crc kubenswrapper[4812]: E0216 13:54:03.986360 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:54:03 crc kubenswrapper[4812]: I0216 13:54:03.987914 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qj2kj" event={"ID":"d9d0140e-e353-40a3-8970-5007408f4cb8","Type":"ContainerStarted","Data":"34ce48dc251da52807c6f00c9bf5f69700656e45e007a7fc9a517214e9b5551c"} Feb 16 13:54:03 crc kubenswrapper[4812]: I0216 13:54:03.991202 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.050953 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6q6x6" event={"ID":"a35f33f0-33ff-4938-b15a-455a830ac631","Type":"ContainerStarted","Data":"4f93cb8c7224bf58c7d9140abffaf7b9a8aea79dd27a2b796acaaf74c8817355"} Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.068697 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-khzht" event={"ID":"9e9a8b01-3875-4489-8598-377dfdac550f","Type":"ContainerStarted","Data":"ae91021ff1ab83203f88cd169795174a4f1816886a22a8fb2e0e2791cf4af841"} Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.095205 4812 generic.go:334] "Generic (PLEG): container finished" podID="c5f831e6-8de4-498e-9337-0e9c274c2af6" containerID="24190ada3ef798acb6e946ca51db6cb6e089b39c1935cda71d4b43aa48fc417d" exitCode=0 Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.095308 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" event={"ID":"c5f831e6-8de4-498e-9337-0e9c274c2af6","Type":"ContainerDied","Data":"24190ada3ef798acb6e946ca51db6cb6e089b39c1935cda71d4b43aa48fc417d"} Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.116640 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-p4bgr"] Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.128126 4812 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d76qk" event={"ID":"b3e61e08-7ed1-43ed-a137-910b10e85e36","Type":"ContainerStarted","Data":"8c7bfb705356ea1ff433f2fae322fc5b4bae99c61cdf8d9d3212fe10f9cee03b"} Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.130544 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-6q6x6" podStartSLOduration=4.130516935 podStartE2EDuration="4.130516935s" podCreationTimestamp="2026-02-16 13:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:54:04.080707407 +0000 UTC m=+1333.145038128" watchObservedRunningTime="2026-02-16 13:54:04.130516935 +0000 UTC m=+1333.194847636" Feb 16 13:54:04 crc kubenswrapper[4812]: W0216 13:54:04.160877 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd76f722_eb61_4676_9456_9a9bb443ef16.slice/crio-9f919e783f6f0439740855420f94fd3d5ca6fa05edcc9d4f510a148ad002922e WatchSource:0}: Error finding container 9f919e783f6f0439740855420f94fd3d5ca6fa05edcc9d4f510a148ad002922e: Status 404 returned error can't find the container with id 9f919e783f6f0439740855420f94fd3d5ca6fa05edcc9d4f510a148ad002922e Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.167211 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-khzht" podStartSLOduration=6.16718019 podStartE2EDuration="6.16718019s" podCreationTimestamp="2026-02-16 13:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:54:04.128882917 +0000 UTC m=+1333.193213628" watchObservedRunningTime="2026-02-16 13:54:04.16718019 +0000 UTC m=+1333.231510891" Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.507641 4812 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.634803 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qskxm\" (UniqueName: \"kubernetes.io/projected/ceb8fdfb-dd06-417c-91db-9b6843d52984-kube-api-access-qskxm\") pod \"ceb8fdfb-dd06-417c-91db-9b6843d52984\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.635056 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-dns-svc\") pod \"ceb8fdfb-dd06-417c-91db-9b6843d52984\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.636392 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-dns-swift-storage-0\") pod \"ceb8fdfb-dd06-417c-91db-9b6843d52984\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.638464 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-ovsdbserver-sb\") pod \"ceb8fdfb-dd06-417c-91db-9b6843d52984\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.638566 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-config\") pod \"ceb8fdfb-dd06-417c-91db-9b6843d52984\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.638596 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-ovsdbserver-nb\") pod \"ceb8fdfb-dd06-417c-91db-9b6843d52984\" (UID: \"ceb8fdfb-dd06-417c-91db-9b6843d52984\") " Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.692169 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceb8fdfb-dd06-417c-91db-9b6843d52984-kube-api-access-qskxm" (OuterVolumeSpecName: "kube-api-access-qskxm") pod "ceb8fdfb-dd06-417c-91db-9b6843d52984" (UID: "ceb8fdfb-dd06-417c-91db-9b6843d52984"). InnerVolumeSpecName "kube-api-access-qskxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.740877 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.749725 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qskxm\" (UniqueName: \"kubernetes.io/projected/ceb8fdfb-dd06-417c-91db-9b6843d52984-kube-api-access-qskxm\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:04 crc kubenswrapper[4812]: I0216 13:54:04.775879 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-l5kzb"] Feb 16 13:54:04 crc kubenswrapper[4812]: W0216 13:54:04.782620 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c359d03_e59e_4b85_8599_826a340acc8f.slice/crio-121692d1b15b82347e325fdf9228416664ab4b0645aa2e9f78f09d7111889a80 WatchSource:0}: Error finding container 121692d1b15b82347e325fdf9228416664ab4b0645aa2e9f78f09d7111889a80: Status 404 returned error can't find the container with id 121692d1b15b82347e325fdf9228416664ab4b0645aa2e9f78f09d7111889a80 Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.224469 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ceb8fdfb-dd06-417c-91db-9b6843d52984" (UID: "ceb8fdfb-dd06-417c-91db-9b6843d52984"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.231372 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ceb8fdfb-dd06-417c-91db-9b6843d52984" (UID: "ceb8fdfb-dd06-417c-91db-9b6843d52984"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.265600 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ceb8fdfb-dd06-417c-91db-9b6843d52984" (UID: "ceb8fdfb-dd06-417c-91db-9b6843d52984"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.288277 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.290109 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.290215 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.354856 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-config" (OuterVolumeSpecName: "config") pod "ceb8fdfb-dd06-417c-91db-9b6843d52984" (UID: "ceb8fdfb-dd06-417c-91db-9b6843d52984"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.390432 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-qql9l" event={"ID":"ceb8fdfb-dd06-417c-91db-9b6843d52984","Type":"ContainerDied","Data":"20a770c95407522eabbd85d836f5df567ff72b9a797b7fee694974ee33b51634"} Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.390877 4812 scope.go:117] "RemoveContainer" containerID="5d2ac037a91f96dfa4056d7f1ce34eae113ecfef5454086c4efc2ed0d015624c" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.391119 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-qql9l" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.394725 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.400273 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ceb8fdfb-dd06-417c-91db-9b6843d52984" (UID: "ceb8fdfb-dd06-417c-91db-9b6843d52984"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.406273 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" event={"ID":"20756ea3-8baf-4fba-92bc-7aa474e2bc0a","Type":"ContainerStarted","Data":"d66a1fa4aeadd42d3efef7e5a78160db092f36292c071b79fc5031385ff5d784"} Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.430220 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-p4bgr" event={"ID":"dd76f722-eb61-4676-9456-9a9bb443ef16","Type":"ContainerStarted","Data":"9f919e783f6f0439740855420f94fd3d5ca6fa05edcc9d4f510a148ad002922e"} Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.440589 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c359d03-e59e-4b85-8599-826a340acc8f","Type":"ContainerStarted","Data":"121692d1b15b82347e325fdf9228416664ab4b0645aa2e9f78f09d7111889a80"} Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.500115 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceb8fdfb-dd06-417c-91db-9b6843d52984-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:05 crc kubenswrapper[4812]: 
E0216 13:54:05.687526 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.729621 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.859892 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skhxd\" (UniqueName: \"kubernetes.io/projected/c5f831e6-8de4-498e-9337-0e9c274c2af6-kube-api-access-skhxd\") pod \"c5f831e6-8de4-498e-9337-0e9c274c2af6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.859973 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-ovsdbserver-sb\") pod \"c5f831e6-8de4-498e-9337-0e9c274c2af6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.860198 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-dns-svc\") pod \"c5f831e6-8de4-498e-9337-0e9c274c2af6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.860221 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-ovsdbserver-nb\") pod \"c5f831e6-8de4-498e-9337-0e9c274c2af6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " Feb 16 13:54:05 crc 
kubenswrapper[4812]: I0216 13:54:05.860243 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-dns-swift-storage-0\") pod \"c5f831e6-8de4-498e-9337-0e9c274c2af6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.860272 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-config\") pod \"c5f831e6-8de4-498e-9337-0e9c274c2af6\" (UID: \"c5f831e6-8de4-498e-9337-0e9c274c2af6\") " Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.866547 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-qql9l"] Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.947283 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-config" (OuterVolumeSpecName: "config") pod "c5f831e6-8de4-498e-9337-0e9c274c2af6" (UID: "c5f831e6-8de4-498e-9337-0e9c274c2af6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.947331 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f831e6-8de4-498e-9337-0e9c274c2af6-kube-api-access-skhxd" (OuterVolumeSpecName: "kube-api-access-skhxd") pod "c5f831e6-8de4-498e-9337-0e9c274c2af6" (UID: "c5f831e6-8de4-498e-9337-0e9c274c2af6"). InnerVolumeSpecName "kube-api-access-skhxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.950334 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5f831e6-8de4-498e-9337-0e9c274c2af6" (UID: "c5f831e6-8de4-498e-9337-0e9c274c2af6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.959802 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5f831e6-8de4-498e-9337-0e9c274c2af6" (UID: "c5f831e6-8de4-498e-9337-0e9c274c2af6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.970356 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.970411 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.970425 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:05 crc kubenswrapper[4812]: I0216 13:54:05.970437 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skhxd\" (UniqueName: \"kubernetes.io/projected/c5f831e6-8de4-498e-9337-0e9c274c2af6-kube-api-access-skhxd\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:06 crc 
kubenswrapper[4812]: I0216 13:54:06.012171 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c5f831e6-8de4-498e-9337-0e9c274c2af6" (UID: "c5f831e6-8de4-498e-9337-0e9c274c2af6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:06 crc kubenswrapper[4812]: I0216 13:54:06.047049 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c5f831e6-8de4-498e-9337-0e9c274c2af6" (UID: "c5f831e6-8de4-498e-9337-0e9c274c2af6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:06 crc kubenswrapper[4812]: I0216 13:54:06.062058 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-qql9l"] Feb 16 13:54:06 crc kubenswrapper[4812]: I0216 13:54:06.073996 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:06 crc kubenswrapper[4812]: I0216 13:54:06.074087 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5f831e6-8de4-498e-9337-0e9c274c2af6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:06 crc kubenswrapper[4812]: I0216 13:54:06.555825 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" event={"ID":"c5f831e6-8de4-498e-9337-0e9c274c2af6","Type":"ContainerDied","Data":"ca67b0cb43e185c61925cef520a1494cc168d857812a1f08161751d86a328387"} Feb 16 13:54:06 crc kubenswrapper[4812]: I0216 13:54:06.556367 4812 scope.go:117] 
"RemoveContainer" containerID="24190ada3ef798acb6e946ca51db6cb6e089b39c1935cda71d4b43aa48fc417d" Feb 16 13:54:06 crc kubenswrapper[4812]: I0216 13:54:06.556575 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-vmrr6" Feb 16 13:54:06 crc kubenswrapper[4812]: I0216 13:54:06.754381 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-vmrr6"] Feb 16 13:54:06 crc kubenswrapper[4812]: I0216 13:54:06.800308 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-vmrr6"] Feb 16 13:54:07 crc kubenswrapper[4812]: I0216 13:54:07.659217 4812 generic.go:334] "Generic (PLEG): container finished" podID="20756ea3-8baf-4fba-92bc-7aa474e2bc0a" containerID="ea29abf9c8be4ffb3128c6637f3f4ab0261e583d2ce6f7050d3ab615883bf7d5" exitCode=0 Feb 16 13:54:07 crc kubenswrapper[4812]: I0216 13:54:07.660415 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" event={"ID":"20756ea3-8baf-4fba-92bc-7aa474e2bc0a","Type":"ContainerDied","Data":"ea29abf9c8be4ffb3128c6637f3f4ab0261e583d2ce6f7050d3ab615883bf7d5"} Feb 16 13:54:07 crc kubenswrapper[4812]: I0216 13:54:07.927830 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f831e6-8de4-498e-9337-0e9c274c2af6" path="/var/lib/kubelet/pods/c5f831e6-8de4-498e-9337-0e9c274c2af6/volumes" Feb 16 13:54:07 crc kubenswrapper[4812]: I0216 13:54:07.928608 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceb8fdfb-dd06-417c-91db-9b6843d52984" path="/var/lib/kubelet/pods/ceb8fdfb-dd06-417c-91db-9b6843d52984/volumes" Feb 16 13:54:08 crc kubenswrapper[4812]: I0216 13:54:08.704047 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" event={"ID":"20756ea3-8baf-4fba-92bc-7aa474e2bc0a","Type":"ContainerStarted","Data":"fa9b1ec80a159bf3befd366e3859002075846afdd09adb7f87876632faf2e37e"} 
Feb 16 13:54:08 crc kubenswrapper[4812]: I0216 13:54:08.707653 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:12 crc kubenswrapper[4812]: I0216 13:54:12.062474 4812 generic.go:334] "Generic (PLEG): container finished" podID="03e0b815-7641-435c-9934-05f5c5307962" containerID="bde09c54d3755326e46294c9aa3086a0cacb04f9e964ac8d8b7cd14f37f0b309" exitCode=0 Feb 16 13:54:12 crc kubenswrapper[4812]: I0216 13:54:12.065484 4812 generic.go:334] "Generic (PLEG): container finished" podID="9e9a8b01-3875-4489-8598-377dfdac550f" containerID="ae91021ff1ab83203f88cd169795174a4f1816886a22a8fb2e0e2791cf4af841" exitCode=0 Feb 16 13:54:12 crc kubenswrapper[4812]: I0216 13:54:12.078134 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mwzf9" event={"ID":"03e0b815-7641-435c-9934-05f5c5307962","Type":"ContainerDied","Data":"bde09c54d3755326e46294c9aa3086a0cacb04f9e964ac8d8b7cd14f37f0b309"} Feb 16 13:54:12 crc kubenswrapper[4812]: I0216 13:54:12.078223 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-khzht" event={"ID":"9e9a8b01-3875-4489-8598-377dfdac550f","Type":"ContainerDied","Data":"ae91021ff1ab83203f88cd169795174a4f1816886a22a8fb2e0e2791cf4af841"} Feb 16 13:54:12 crc kubenswrapper[4812]: I0216 13:54:12.115874 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" podStartSLOduration=12.115847623 podStartE2EDuration="12.115847623s" podCreationTimestamp="2026-02-16 13:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:54:08.750129142 +0000 UTC m=+1337.814459843" watchObservedRunningTime="2026-02-16 13:54:12.115847623 +0000 UTC m=+1341.180178324" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.135350 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mwzf9" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.136819 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mwzf9" event={"ID":"03e0b815-7641-435c-9934-05f5c5307962","Type":"ContainerDied","Data":"3e0ff0760e8638fa5a6197fd3faaa765372a565ab5a53b4cd2e9dc3296c7d27b"} Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.136856 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e0ff0760e8638fa5a6197fd3faaa765372a565ab5a53b4cd2e9dc3296c7d27b" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.146545 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-khzht" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.148266 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-khzht" event={"ID":"9e9a8b01-3875-4489-8598-377dfdac550f","Type":"ContainerDied","Data":"1fe7273e0bc9aa83d0d11572c23d927de0a24067c0bdbd385f7566fce7ec0dce"} Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.148340 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fe7273e0bc9aa83d0d11572c23d927de0a24067c0bdbd385f7566fce7ec0dce" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.206581 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-db-sync-config-data\") pod \"03e0b815-7641-435c-9934-05f5c5307962\" (UID: \"03e0b815-7641-435c-9934-05f5c5307962\") " Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.206728 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-config-data\") pod \"03e0b815-7641-435c-9934-05f5c5307962\" (UID: \"03e0b815-7641-435c-9934-05f5c5307962\") 
" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.206875 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjczf\" (UniqueName: \"kubernetes.io/projected/9e9a8b01-3875-4489-8598-377dfdac550f-kube-api-access-hjczf\") pod \"9e9a8b01-3875-4489-8598-377dfdac550f\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.206927 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-fernet-keys\") pod \"9e9a8b01-3875-4489-8598-377dfdac550f\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.207216 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcc8t\" (UniqueName: \"kubernetes.io/projected/03e0b815-7641-435c-9934-05f5c5307962-kube-api-access-hcc8t\") pod \"03e0b815-7641-435c-9934-05f5c5307962\" (UID: \"03e0b815-7641-435c-9934-05f5c5307962\") " Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.207489 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-scripts\") pod \"9e9a8b01-3875-4489-8598-377dfdac550f\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.207628 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-combined-ca-bundle\") pod \"9e9a8b01-3875-4489-8598-377dfdac550f\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.207722 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-config-data\") pod \"9e9a8b01-3875-4489-8598-377dfdac550f\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.207939 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-combined-ca-bundle\") pod \"03e0b815-7641-435c-9934-05f5c5307962\" (UID: \"03e0b815-7641-435c-9934-05f5c5307962\") " Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.208026 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-credential-keys\") pod \"9e9a8b01-3875-4489-8598-377dfdac550f\" (UID: \"9e9a8b01-3875-4489-8598-377dfdac550f\") " Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.332221 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9e9a8b01-3875-4489-8598-377dfdac550f" (UID: "9e9a8b01-3875-4489-8598-377dfdac550f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.332350 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9e9a8b01-3875-4489-8598-377dfdac550f" (UID: "9e9a8b01-3875-4489-8598-377dfdac550f"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.332485 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e0b815-7641-435c-9934-05f5c5307962-kube-api-access-hcc8t" (OuterVolumeSpecName: "kube-api-access-hcc8t") pod "03e0b815-7641-435c-9934-05f5c5307962" (UID: "03e0b815-7641-435c-9934-05f5c5307962"). InnerVolumeSpecName "kube-api-access-hcc8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.332685 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "03e0b815-7641-435c-9934-05f5c5307962" (UID: "03e0b815-7641-435c-9934-05f5c5307962"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.334595 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9a8b01-3875-4489-8598-377dfdac550f-kube-api-access-hjczf" (OuterVolumeSpecName: "kube-api-access-hjczf") pod "9e9a8b01-3875-4489-8598-377dfdac550f" (UID: "9e9a8b01-3875-4489-8598-377dfdac550f"). InnerVolumeSpecName "kube-api-access-hjczf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.336709 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-scripts" (OuterVolumeSpecName: "scripts") pod "9e9a8b01-3875-4489-8598-377dfdac550f" (UID: "9e9a8b01-3875-4489-8598-377dfdac550f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.340554 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-config-data" (OuterVolumeSpecName: "config-data") pod "9e9a8b01-3875-4489-8598-377dfdac550f" (UID: "9e9a8b01-3875-4489-8598-377dfdac550f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.363422 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03e0b815-7641-435c-9934-05f5c5307962" (UID: "03e0b815-7641-435c-9934-05f5c5307962"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.370711 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e9a8b01-3875-4489-8598-377dfdac550f" (UID: "9e9a8b01-3875-4489-8598-377dfdac550f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.453471 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.453535 4812 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.453548 4812 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.469339 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-config-data" (OuterVolumeSpecName: "config-data") pod "03e0b815-7641-435c-9934-05f5c5307962" (UID: "03e0b815-7641-435c-9934-05f5c5307962"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.453565 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjczf\" (UniqueName: \"kubernetes.io/projected/9e9a8b01-3875-4489-8598-377dfdac550f-kube-api-access-hjczf\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.472565 4812 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.472601 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcc8t\" (UniqueName: \"kubernetes.io/projected/03e0b815-7641-435c-9934-05f5c5307962-kube-api-access-hcc8t\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.472615 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.472631 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.472642 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9a8b01-3875-4489-8598-377dfdac550f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.555074 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.555206 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.576944 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e0b815-7641-435c-9934-05f5c5307962-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.670813 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-khzht"] Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.687207 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-khzht"] Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.705763 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hcgfc"] Feb 16 13:54:14 crc kubenswrapper[4812]: E0216 13:54:14.706379 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e0b815-7641-435c-9934-05f5c5307962" containerName="glance-db-sync" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.706414 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e0b815-7641-435c-9934-05f5c5307962" containerName="glance-db-sync" Feb 16 13:54:14 crc kubenswrapper[4812]: E0216 13:54:14.706436 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb8fdfb-dd06-417c-91db-9b6843d52984" containerName="init" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.706456 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb8fdfb-dd06-417c-91db-9b6843d52984" containerName="init" Feb 16 13:54:14 crc kubenswrapper[4812]: E0216 13:54:14.706478 4812 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f831e6-8de4-498e-9337-0e9c274c2af6" containerName="init" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.706484 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f831e6-8de4-498e-9337-0e9c274c2af6" containerName="init" Feb 16 13:54:14 crc kubenswrapper[4812]: E0216 13:54:14.706518 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9a8b01-3875-4489-8598-377dfdac550f" containerName="keystone-bootstrap" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.706524 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9a8b01-3875-4489-8598-377dfdac550f" containerName="keystone-bootstrap" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.706760 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f831e6-8de4-498e-9337-0e9c274c2af6" containerName="init" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.706776 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9a8b01-3875-4489-8598-377dfdac550f" containerName="keystone-bootstrap" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.706791 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e0b815-7641-435c-9934-05f5c5307962" containerName="glance-db-sync" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.706802 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb8fdfb-dd06-417c-91db-9b6843d52984" containerName="init" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.707826 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.728601 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hcgfc"] Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.786008 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-combined-ca-bundle\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.786074 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-scripts\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.786122 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-fernet-keys\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.786173 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j88n5\" (UniqueName: \"kubernetes.io/projected/2b502458-ea63-4fa7-80b5-5812a46900f4-kube-api-access-j88n5\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.786212 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-config-data\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.786229 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-credential-keys\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.887604 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-combined-ca-bundle\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.887691 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-scripts\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.887750 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-fernet-keys\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.887850 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j88n5\" (UniqueName: 
\"kubernetes.io/projected/2b502458-ea63-4fa7-80b5-5812a46900f4-kube-api-access-j88n5\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.887963 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-config-data\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.887996 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-credential-keys\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.897539 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-combined-ca-bundle\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.906895 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-credential-keys\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.909133 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-fernet-keys\") pod \"keystone-bootstrap-hcgfc\" (UID: 
\"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.913036 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-scripts\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.919241 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-config-data\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:14 crc kubenswrapper[4812]: I0216 13:54:14.989390 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j88n5\" (UniqueName: \"kubernetes.io/projected/2b502458-ea63-4fa7-80b5-5812a46900f4-kube-api-access-j88n5\") pod \"keystone-bootstrap-hcgfc\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.040466 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.162560 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-l5kzb"] Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.162967 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" podUID="20756ea3-8baf-4fba-92bc-7aa474e2bc0a" containerName="dnsmasq-dns" containerID="cri-o://fa9b1ec80a159bf3befd366e3859002075846afdd09adb7f87876632faf2e37e" gracePeriod=10 Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.169519 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.478653 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-khzht" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.479288 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mwzf9" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.606863 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-c4st6"] Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.609419 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.658844 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-c4st6"] Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.796794 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.796918 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpxt2\" (UniqueName: \"kubernetes.io/projected/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-kube-api-access-mpxt2\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.796945 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.796969 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.797168 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-config\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.797205 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.899215 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-config\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.899276 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.899324 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.899376 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-mpxt2\" (UniqueName: \"kubernetes.io/projected/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-kube-api-access-mpxt2\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.899403 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.899428 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.900595 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.903924 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.904226 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.905138 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:15 crc kubenswrapper[4812]: I0216 13:54:15.905148 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-config\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.090127 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9a8b01-3875-4489-8598-377dfdac550f" path="/var/lib/kubelet/pods/9e9a8b01-3875-4489-8598-377dfdac550f/volumes" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.108746 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpxt2\" (UniqueName: \"kubernetes.io/projected/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-kube-api-access-mpxt2\") pod \"dnsmasq-dns-785d8bcb8c-c4st6\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.137312 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.147339 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.152736 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.152767 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-v9qd8" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.157538 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.193613 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.305645 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2ac91dc-9185-46fe-9583-3355cb2be045-logs\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.305769 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2ac91dc-9185-46fe-9583-3355cb2be045-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.305845 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 
13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.305873 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-config-data\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.305898 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.305923 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp9k9\" (UniqueName: \"kubernetes.io/projected/e2ac91dc-9185-46fe-9583-3355cb2be045-kube-api-access-pp9k9\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.305966 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-scripts\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.335387 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.409017 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2ac91dc-9185-46fe-9583-3355cb2be045-logs\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.409136 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2ac91dc-9185-46fe-9583-3355cb2be045-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.409204 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.409254 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-config-data\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.409276 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " 
pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.409305 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp9k9\" (UniqueName: \"kubernetes.io/projected/e2ac91dc-9185-46fe-9583-3355cb2be045-kube-api-access-pp9k9\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.409346 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-scripts\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.410959 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2ac91dc-9185-46fe-9583-3355cb2be045-logs\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.411617 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2ac91dc-9185-46fe-9583-3355cb2be045-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.415294 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 
13:54:16.416464 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-scripts\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.417813 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.417867 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4ce26190c4ae61da75993487dc8cd464b862eed00b3412abb1c020ef48a7c392/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.418034 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-config-data\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.433510 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp9k9\" (UniqueName: \"kubernetes.io/projected/e2ac91dc-9185-46fe-9583-3355cb2be045-kube-api-access-pp9k9\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.514107 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") pod \"glance-default-external-api-0\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " pod="openstack/glance-default-external-api-0" Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.531317 4812 generic.go:334] "Generic (PLEG): container finished" podID="20756ea3-8baf-4fba-92bc-7aa474e2bc0a" containerID="fa9b1ec80a159bf3befd366e3859002075846afdd09adb7f87876632faf2e37e" exitCode=0 Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.531394 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" event={"ID":"20756ea3-8baf-4fba-92bc-7aa474e2bc0a","Type":"ContainerDied","Data":"fa9b1ec80a159bf3befd366e3859002075846afdd09adb7f87876632faf2e37e"} Feb 16 13:54:16 crc kubenswrapper[4812]: I0216 13:54:16.874018 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.141722 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.144504 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.149017 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.169528 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.283985 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.285413 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59hbz\" (UniqueName: \"kubernetes.io/projected/7d0e0e26-2608-436e-847d-d4bee61b1d85-kube-api-access-59hbz\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.285663 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.285903 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.286130 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d0e0e26-2608-436e-847d-d4bee61b1d85-logs\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.286309 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d0e0e26-2608-436e-847d-d4bee61b1d85-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.286528 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.390016 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d0e0e26-2608-436e-847d-d4bee61b1d85-logs\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.390110 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d0e0e26-2608-436e-847d-d4bee61b1d85-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.390170 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.390214 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.390239 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59hbz\" (UniqueName: \"kubernetes.io/projected/7d0e0e26-2608-436e-847d-d4bee61b1d85-kube-api-access-59hbz\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.390278 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.390361 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.390677 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d0e0e26-2608-436e-847d-d4bee61b1d85-logs\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.394591 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d0e0e26-2608-436e-847d-d4bee61b1d85-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.408480 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.409601 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.422518 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59hbz\" (UniqueName: \"kubernetes.io/projected/7d0e0e26-2608-436e-847d-d4bee61b1d85-kube-api-access-59hbz\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc 
kubenswrapper[4812]: I0216 13:54:17.436911 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.528353 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.528892 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6691502de4876dbd0d40188b23458c72f9080870e675ce533942e270fddd7230/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.683899 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" podUID="20756ea3-8baf-4fba-92bc-7aa474e2bc0a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: connect: connection refused" Feb 16 13:54:17 crc kubenswrapper[4812]: I0216 13:54:17.831797 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") pod \"glance-default-internal-api-0\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:54:18 crc kubenswrapper[4812]: I0216 13:54:18.081523 4812 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:54:19 crc kubenswrapper[4812]: E0216 13:54:19.036382 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 13:54:19 crc kubenswrapper[4812]: E0216 13:54:19.036990 4812 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 13:54:19 crc kubenswrapper[4812]: E0216 13:54:19.037255 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq54s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-krnzs_openstack(a7d4eae6-781f-4675-a6c3-ee0f1589c735): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 13:54:19 crc kubenswrapper[4812]: E0216 13:54:19.038473 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:54:19 crc kubenswrapper[4812]: I0216 13:54:19.972856 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:54:20 crc kubenswrapper[4812]: I0216 13:54:20.073311 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:54:22 crc kubenswrapper[4812]: E0216 13:54:22.548481 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 16 13:54:22 crc kubenswrapper[4812]: E0216 13:54:22.549361 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n559hd4hbfhc6hd9hc6h665h555hbbh694h5b6hf4h78h5ddh55dhb8h85h76h78h7dhd9h64dhd7h58fh54dh5c4h85h6fh659h7dhc8h559q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxsq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4c359d03-e59e-4b85-8599-826a340acc8f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:54:22 crc kubenswrapper[4812]: I0216 13:54:22.682805 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" podUID="20756ea3-8baf-4fba-92bc-7aa474e2bc0a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: connect: connection refused" Feb 16 13:54:27 crc kubenswrapper[4812]: E0216 13:54:27.524026 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34" Feb 16 13:54:27 crc kubenswrapper[4812]: E0216 13:54:27.525127 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info --prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkzm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(e02a9868-e12c-4a65-9ba5-4a5965131b5b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 13:54:27 crc kubenswrapper[4812]: E0216 
13:54:27.526379 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" Feb 16 13:54:27 crc kubenswrapper[4812]: E0216 13:54:27.872014 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" Feb 16 13:54:29 crc kubenswrapper[4812]: E0216 13:54:29.892300 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:54:30 crc kubenswrapper[4812]: I0216 13:54:30.458133 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 13:54:30 crc kubenswrapper[4812]: E0216 13:54:30.463382 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" Feb 16 13:54:32 crc kubenswrapper[4812]: I0216 13:54:32.683515 4812 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" podUID="20756ea3-8baf-4fba-92bc-7aa474e2bc0a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Feb 16 13:54:32 crc kubenswrapper[4812]: I0216 13:54:32.684866 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:35 crc kubenswrapper[4812]: I0216 13:54:35.094901 4812 generic.go:334] "Generic (PLEG): container finished" podID="a35f33f0-33ff-4938-b15a-455a830ac631" containerID="4f93cb8c7224bf58c7d9140abffaf7b9a8aea79dd27a2b796acaaf74c8817355" exitCode=0 Feb 16 13:54:35 crc kubenswrapper[4812]: I0216 13:54:35.095006 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6q6x6" event={"ID":"a35f33f0-33ff-4938-b15a-455a830ac631","Type":"ContainerDied","Data":"4f93cb8c7224bf58c7d9140abffaf7b9a8aea79dd27a2b796acaaf74c8817355"} Feb 16 13:54:37 crc kubenswrapper[4812]: I0216 13:54:37.685438 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" podUID="20756ea3-8baf-4fba-92bc-7aa474e2bc0a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Feb 16 13:54:40 crc kubenswrapper[4812]: I0216 13:54:40.227514 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 13:54:40 crc kubenswrapper[4812]: I0216 13:54:40.232715 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:54:40 crc kubenswrapper[4812]: I0216 13:54:40.236882 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 13:54:42 crc kubenswrapper[4812]: I0216 13:54:42.686858 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" podUID="20756ea3-8baf-4fba-92bc-7aa474e2bc0a" 
containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Feb 16 13:54:43 crc kubenswrapper[4812]: E0216 13:54:43.009572 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 13:54:43 crc kubenswrapper[4812]: E0216 13:54:43.010089 4812 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 13:54:43 crc kubenswrapper[4812]: E0216 13:54:43.010315 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq54s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-krnzs_openstack(a7d4eae6-781f-4675-a6c3-ee0f1589c735): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 13:54:43 crc kubenswrapper[4812]: E0216 13:54:43.011614 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:54:43 crc kubenswrapper[4812]: E0216 13:54:43.202838 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 16 13:54:43 crc kubenswrapper[4812]: E0216 13:54:43.203128 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qz7gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:
[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-p4bgr_openstack(dd76f722-eb61-4676-9456-9a9bb443ef16): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:54:43 crc kubenswrapper[4812]: E0216 13:54:43.204685 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-p4bgr" podUID="dd76f722-eb61-4676-9456-9a9bb443ef16" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.418808 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.441805 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6q6x6" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.496531 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-ovsdbserver-nb\") pod \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.496803 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-dns-svc\") pod \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.496902 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt295\" (UniqueName: \"kubernetes.io/projected/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-kube-api-access-rt295\") pod \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.496997 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-config\") pod \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.497099 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-ovsdbserver-sb\") pod \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.497277 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-dns-swift-storage-0\") pod \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.505884 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-kube-api-access-rt295" (OuterVolumeSpecName: "kube-api-access-rt295") pod "20756ea3-8baf-4fba-92bc-7aa474e2bc0a" (UID: "20756ea3-8baf-4fba-92bc-7aa474e2bc0a"). InnerVolumeSpecName "kube-api-access-rt295". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.582405 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "20756ea3-8baf-4fba-92bc-7aa474e2bc0a" (UID: "20756ea3-8baf-4fba-92bc-7aa474e2bc0a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.586765 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "20756ea3-8baf-4fba-92bc-7aa474e2bc0a" (UID: "20756ea3-8baf-4fba-92bc-7aa474e2bc0a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.599005 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-config" (OuterVolumeSpecName: "config") pod "20756ea3-8baf-4fba-92bc-7aa474e2bc0a" (UID: "20756ea3-8baf-4fba-92bc-7aa474e2bc0a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.602143 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "20756ea3-8baf-4fba-92bc-7aa474e2bc0a" (UID: "20756ea3-8baf-4fba-92bc-7aa474e2bc0a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.602392 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35f33f0-33ff-4938-b15a-455a830ac631-combined-ca-bundle\") pod \"a35f33f0-33ff-4938-b15a-455a830ac631\" (UID: \"a35f33f0-33ff-4938-b15a-455a830ac631\") " Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.602656 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtf4b\" (UniqueName: \"kubernetes.io/projected/a35f33f0-33ff-4938-b15a-455a830ac631-kube-api-access-rtf4b\") pod \"a35f33f0-33ff-4938-b15a-455a830ac631\" (UID: \"a35f33f0-33ff-4938-b15a-455a830ac631\") " Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.602865 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a35f33f0-33ff-4938-b15a-455a830ac631-config\") pod \"a35f33f0-33ff-4938-b15a-455a830ac631\" (UID: \"a35f33f0-33ff-4938-b15a-455a830ac631\") " Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.602944 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-config\") pod \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.603224 4812 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-dns-swift-storage-0\") pod \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\" (UID: \"20756ea3-8baf-4fba-92bc-7aa474e2bc0a\") " Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.603333 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "20756ea3-8baf-4fba-92bc-7aa474e2bc0a" (UID: "20756ea3-8baf-4fba-92bc-7aa474e2bc0a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:43 crc kubenswrapper[4812]: W0216 13:54:43.603742 4812 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/20756ea3-8baf-4fba-92bc-7aa474e2bc0a/volumes/kubernetes.io~configmap/config Feb 16 13:54:43 crc kubenswrapper[4812]: W0216 13:54:43.603929 4812 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/20756ea3-8baf-4fba-92bc-7aa474e2bc0a/volumes/kubernetes.io~configmap/dns-swift-storage-0 Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.603968 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "20756ea3-8baf-4fba-92bc-7aa474e2bc0a" (UID: "20756ea3-8baf-4fba-92bc-7aa474e2bc0a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.603918 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-config" (OuterVolumeSpecName: "config") pod "20756ea3-8baf-4fba-92bc-7aa474e2bc0a" (UID: "20756ea3-8baf-4fba-92bc-7aa474e2bc0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.605192 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt295\" (UniqueName: \"kubernetes.io/projected/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-kube-api-access-rt295\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.605338 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.605411 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.605548 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.605610 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.605680 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/20756ea3-8baf-4fba-92bc-7aa474e2bc0a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.625719 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a35f33f0-33ff-4938-b15a-455a830ac631-kube-api-access-rtf4b" (OuterVolumeSpecName: "kube-api-access-rtf4b") pod "a35f33f0-33ff-4938-b15a-455a830ac631" (UID: "a35f33f0-33ff-4938-b15a-455a830ac631"). InnerVolumeSpecName "kube-api-access-rtf4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.640092 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35f33f0-33ff-4938-b15a-455a830ac631-config" (OuterVolumeSpecName: "config") pod "a35f33f0-33ff-4938-b15a-455a830ac631" (UID: "a35f33f0-33ff-4938-b15a-455a830ac631"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.653088 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35f33f0-33ff-4938-b15a-455a830ac631-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a35f33f0-33ff-4938-b15a-455a830ac631" (UID: "a35f33f0-33ff-4938-b15a-455a830ac631"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.714497 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35f33f0-33ff-4938-b15a-455a830ac631-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.714638 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtf4b\" (UniqueName: \"kubernetes.io/projected/a35f33f0-33ff-4938-b15a-455a830ac631-kube-api-access-rtf4b\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:43 crc kubenswrapper[4812]: I0216 13:54:43.714673 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a35f33f0-33ff-4938-b15a-455a830ac631-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:44 crc kubenswrapper[4812]: I0216 13:54:44.213619 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6q6x6" event={"ID":"a35f33f0-33ff-4938-b15a-455a830ac631","Type":"ContainerDied","Data":"457fd8b93d42f1a4a6424d7c4c5f0f25552add40d76b19450a0f67074e6ff355"} Feb 16 13:54:44 crc kubenswrapper[4812]: I0216 13:54:44.213700 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="457fd8b93d42f1a4a6424d7c4c5f0f25552add40d76b19450a0f67074e6ff355" Feb 16 13:54:44 crc kubenswrapper[4812]: I0216 13:54:44.213720 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6q6x6" Feb 16 13:54:44 crc kubenswrapper[4812]: I0216 13:54:44.223537 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" Feb 16 13:54:44 crc kubenswrapper[4812]: I0216 13:54:44.225013 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" event={"ID":"20756ea3-8baf-4fba-92bc-7aa474e2bc0a","Type":"ContainerDied","Data":"d66a1fa4aeadd42d3efef7e5a78160db092f36292c071b79fc5031385ff5d784"} Feb 16 13:54:44 crc kubenswrapper[4812]: I0216 13:54:44.225084 4812 scope.go:117] "RemoveContainer" containerID="fa9b1ec80a159bf3befd366e3859002075846afdd09adb7f87876632faf2e37e" Feb 16 13:54:44 crc kubenswrapper[4812]: E0216 13:54:44.229010 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-p4bgr" podUID="dd76f722-eb61-4676-9456-9a9bb443ef16" Feb 16 13:54:44 crc kubenswrapper[4812]: I0216 13:54:44.304632 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-l5kzb"] Feb 16 13:54:44 crc kubenswrapper[4812]: I0216 13:54:44.315948 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-l5kzb"] Feb 16 13:54:44 crc kubenswrapper[4812]: I0216 13:54:44.555200 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:54:44 crc kubenswrapper[4812]: I0216 13:54:44.555297 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 16 13:54:44 crc kubenswrapper[4812]: I0216 13:54:44.555374 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 13:54:44 crc kubenswrapper[4812]: I0216 13:54:44.556737 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e326161e933a75a00a9297a9e1cbd3d6a1ed2f661892851e02b5e7109aebd29d"} pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 13:54:44 crc kubenswrapper[4812]: I0216 13:54:44.556831 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" containerID="cri-o://e326161e933a75a00a9297a9e1cbd3d6a1ed2f661892851e02b5e7109aebd29d" gracePeriod=600 Feb 16 13:54:44 crc kubenswrapper[4812]: I0216 13:54:44.921135 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-c4st6"] Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.039944 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-4ww9m"] Feb 16 13:54:45 crc kubenswrapper[4812]: E0216 13:54:45.040805 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a35f33f0-33ff-4938-b15a-455a830ac631" containerName="neutron-db-sync" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.040844 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a35f33f0-33ff-4938-b15a-455a830ac631" containerName="neutron-db-sync" Feb 16 13:54:45 crc kubenswrapper[4812]: E0216 13:54:45.040877 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20756ea3-8baf-4fba-92bc-7aa474e2bc0a" containerName="init" Feb 
16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.040888 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="20756ea3-8baf-4fba-92bc-7aa474e2bc0a" containerName="init" Feb 16 13:54:45 crc kubenswrapper[4812]: E0216 13:54:45.040908 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20756ea3-8baf-4fba-92bc-7aa474e2bc0a" containerName="dnsmasq-dns" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.040916 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="20756ea3-8baf-4fba-92bc-7aa474e2bc0a" containerName="dnsmasq-dns" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.041140 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a35f33f0-33ff-4938-b15a-455a830ac631" containerName="neutron-db-sync" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.041163 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="20756ea3-8baf-4fba-92bc-7aa474e2bc0a" containerName="dnsmasq-dns" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.044536 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.089128 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-4ww9m"] Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.123878 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86c4db556-7x7cc"] Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.126714 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.138458 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.138943 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-w28r7" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.139634 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.139836 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.180420 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.180556 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.180595 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc 
kubenswrapper[4812]: I0216 13:54:45.180631 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-dns-svc\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.180735 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn2ld\" (UniqueName: \"kubernetes.io/projected/1492db35-d6ea-4d34-b29a-6d5537694379-kube-api-access-vn2ld\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.180763 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-config\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.188173 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86c4db556-7x7cc"] Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.232434 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.263825 4812 generic.go:334] "Generic (PLEG): container finished" podID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerID="e326161e933a75a00a9297a9e1cbd3d6a1ed2f661892851e02b5e7109aebd29d" exitCode=0 Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.263954 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerDied","Data":"e326161e933a75a00a9297a9e1cbd3d6a1ed2f661892851e02b5e7109aebd29d"} Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.292955 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-config\") pod \"neutron-86c4db556-7x7cc\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") " pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.293078 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.293112 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-httpd-config\") pod \"neutron-86c4db556-7x7cc\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") " pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.293177 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.293203 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-combined-ca-bundle\") pod 
\"neutron-86c4db556-7x7cc\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") " pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.293244 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.293273 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-dns-svc\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.293305 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6pb9\" (UniqueName: \"kubernetes.io/projected/735893be-02d4-49a0-af55-787ea0f940cb-kube-api-access-f6pb9\") pod \"neutron-86c4db556-7x7cc\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") " pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.293375 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-ovndb-tls-certs\") pod \"neutron-86c4db556-7x7cc\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") " pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.293407 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn2ld\" (UniqueName: \"kubernetes.io/projected/1492db35-d6ea-4d34-b29a-6d5537694379-kube-api-access-vn2ld\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" 
(UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.293464 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-config\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.295046 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.298364 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-dns-svc\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.298484 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-config\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.299653 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc 
kubenswrapper[4812]: I0216 13:54:45.305557 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.359570 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn2ld\" (UniqueName: \"kubernetes.io/projected/1492db35-d6ea-4d34-b29a-6d5537694379-kube-api-access-vn2ld\") pod \"dnsmasq-dns-55f844cf75-4ww9m\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.396422 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-ovndb-tls-certs\") pod \"neutron-86c4db556-7x7cc\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") " pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.396626 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-config\") pod \"neutron-86c4db556-7x7cc\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") " pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.396691 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-httpd-config\") pod \"neutron-86c4db556-7x7cc\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") " pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.396752 4812 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-combined-ca-bundle\") pod \"neutron-86c4db556-7x7cc\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") " pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.396798 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6pb9\" (UniqueName: \"kubernetes.io/projected/735893be-02d4-49a0-af55-787ea0f940cb-kube-api-access-f6pb9\") pod \"neutron-86c4db556-7x7cc\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") " pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.404631 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-ovndb-tls-certs\") pod \"neutron-86c4db556-7x7cc\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") " pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.411490 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-httpd-config\") pod \"neutron-86c4db556-7x7cc\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") " pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.423363 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.430085 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-combined-ca-bundle\") pod \"neutron-86c4db556-7x7cc\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") " pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.430474 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-config\") pod \"neutron-86c4db556-7x7cc\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") " pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.438040 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6pb9\" (UniqueName: \"kubernetes.io/projected/735893be-02d4-49a0-af55-787ea0f940cb-kube-api-access-f6pb9\") pod \"neutron-86c4db556-7x7cc\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") " pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.464377 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:45 crc kubenswrapper[4812]: I0216 13:54:45.896112 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20756ea3-8baf-4fba-92bc-7aa474e2bc0a" path="/var/lib/kubelet/pods/20756ea3-8baf-4fba-92bc-7aa474e2bc0a/volumes" Feb 16 13:54:46 crc kubenswrapper[4812]: E0216 13:54:46.246293 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 16 13:54:46 crc kubenswrapper[4812]: E0216 13:54:46.246766 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPa
th:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qxxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qj2kj_openstack(d9d0140e-e353-40a3-8970-5007408f4cb8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:54:46 crc kubenswrapper[4812]: E0216 13:54:46.247991 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qj2kj" podUID="d9d0140e-e353-40a3-8970-5007408f4cb8" Feb 16 13:54:46 crc kubenswrapper[4812]: E0216 13:54:46.333944 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-qj2kj" 
podUID="d9d0140e-e353-40a3-8970-5007408f4cb8" Feb 16 13:54:46 crc kubenswrapper[4812]: I0216 13:54:46.875894 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hcgfc"] Feb 16 13:54:47 crc kubenswrapper[4812]: E0216 13:54:47.013201 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified" Feb 16 13:54:47 crc kubenswrapper[4812]: E0216 13:54:47.013791 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-notification-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n559hd4hbfhc6hd9hc6h665h555hbbh694h5b6hf4h78h5ddh55dhb8h85h76h78h7dhd9h64dhd7h58fh54dh5c4h85h6fh659h7dhc8h559q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxsq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccou
nt,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4c359d03-e59e-4b85-8599-826a340acc8f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 13:54:47 crc kubenswrapper[4812]: I0216 13:54:47.047373 4812 scope.go:117] "RemoveContainer" containerID="ea29abf9c8be4ffb3128c6637f3f4ab0261e583d2ce6f7050d3ab615883bf7d5" Feb 16 13:54:47 crc kubenswrapper[4812]: W0216 13:54:47.097968 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b502458_ea63_4fa7_80b5_5812a46900f4.slice/crio-0f196114e24aac2913884fab58c865e615ce3a5e4f1b190d73fc51fd284b51e2 WatchSource:0}: Error finding container 0f196114e24aac2913884fab58c865e615ce3a5e4f1b190d73fc51fd284b51e2: Status 404 returned error can't find the container with id 0f196114e24aac2913884fab58c865e615ce3a5e4f1b190d73fc51fd284b51e2 Feb 16 13:54:47 crc kubenswrapper[4812]: I0216 13:54:47.446509 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-hcgfc" event={"ID":"2b502458-ea63-4fa7-80b5-5812a46900f4","Type":"ContainerStarted","Data":"0f196114e24aac2913884fab58c865e615ce3a5e4f1b190d73fc51fd284b51e2"} Feb 16 13:54:47 crc kubenswrapper[4812]: I0216 13:54:47.484186 4812 scope.go:117] "RemoveContainer" containerID="0779ef9b368371eaae022df11f7e6d3b1b2344936b30d611f68295ab80bea825" Feb 16 13:54:47 crc kubenswrapper[4812]: I0216 13:54:47.689654 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-l5kzb" podUID="20756ea3-8baf-4fba-92bc-7aa474e2bc0a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Feb 16 13:54:47 crc kubenswrapper[4812]: I0216 13:54:47.716384 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-c4st6"] Feb 16 13:54:47 crc kubenswrapper[4812]: W0216 13:54:47.836836 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0848f6ce_f0a3_4c2c_8fa5_9c763aacb68c.slice/crio-327e66feeca69f94dd0764e0c6065c0f0be7b72499686cc507b19829ad2d7b81 WatchSource:0}: Error finding container 327e66feeca69f94dd0764e0c6065c0f0be7b72499686cc507b19829ad2d7b81: Status 404 returned error can't find the container with id 327e66feeca69f94dd0764e0c6065c0f0be7b72499686cc507b19829ad2d7b81 Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.250299 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:54:48 crc kubenswrapper[4812]: W0216 13:54:48.289660 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d0e0e26_2608_436e_847d_d4bee61b1d85.slice/crio-3669678055f51e5a5b4fc33edb09b6d0f5a635e7a5e68032f46d3a7ebdb46b85 WatchSource:0}: Error finding container 3669678055f51e5a5b4fc33edb09b6d0f5a635e7a5e68032f46d3a7ebdb46b85: Status 404 returned error 
can't find the container with id 3669678055f51e5a5b4fc33edb09b6d0f5a635e7a5e68032f46d3a7ebdb46b85 Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.496430 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-4ww9m"] Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.565760 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d0e0e26-2608-436e-847d-d4bee61b1d85","Type":"ContainerStarted","Data":"3669678055f51e5a5b4fc33edb09b6d0f5a635e7a5e68032f46d3a7ebdb46b85"} Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.585974 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hcgfc" event={"ID":"2b502458-ea63-4fa7-80b5-5812a46900f4","Type":"ContainerStarted","Data":"915c0bff5b0f180289e5712e4550fbca30c9ec4d16c75d57902f289f8843fe63"} Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.624093 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerStarted","Data":"fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef"} Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.636245 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d76qk" event={"ID":"b3e61e08-7ed1-43ed-a137-910b10e85e36","Type":"ContainerStarted","Data":"8f5f581deb7240f85d1842eb1a42809ae5c341b80f7f652267f30ad19f9e2253"} Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.719123 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" event={"ID":"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c","Type":"ContainerStarted","Data":"327e66feeca69f94dd0764e0c6065c0f0be7b72499686cc507b19829ad2d7b81"} Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.758692 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-bootstrap-hcgfc" podStartSLOduration=34.758661059 podStartE2EDuration="34.758661059s" podCreationTimestamp="2026-02-16 13:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:54:48.64205974 +0000 UTC m=+1377.706390441" watchObservedRunningTime="2026-02-16 13:54:48.758661059 +0000 UTC m=+1377.822991760" Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.765725 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e02a9868-e12c-4a65-9ba5-4a5965131b5b","Type":"ContainerStarted","Data":"beee11999f6adab1ee476e781ae4b7dc4146ad26457b4cf73d6bdf2adf0069fe"} Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.806575 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86c4db556-7x7cc"] Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.830428 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-869988d995-2jcsq"] Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.832787 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.842539 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.842816 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.844701 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-869988d995-2jcsq"] Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.845527 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-d76qk" podStartSLOduration=8.674775791 podStartE2EDuration="48.845504893s" podCreationTimestamp="2026-02-16 13:54:00 +0000 UTC" firstStartedPulling="2026-02-16 13:54:03.097487748 +0000 UTC m=+1332.161818449" lastFinishedPulling="2026-02-16 13:54:43.26821685 +0000 UTC m=+1372.332547551" observedRunningTime="2026-02-16 13:54:48.749944556 +0000 UTC m=+1377.814275257" watchObservedRunningTime="2026-02-16 13:54:48.845504893 +0000 UTC m=+1377.909835594" Feb 16 13:54:48 crc kubenswrapper[4812]: I0216 13:54:48.891011 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=33.952219623 podStartE2EDuration="2m40.890988415s" podCreationTimestamp="2026-02-16 13:52:08 +0000 UTC" firstStartedPulling="2026-02-16 13:52:40.156972412 +0000 UTC m=+1249.221303113" lastFinishedPulling="2026-02-16 13:54:47.095741204 +0000 UTC m=+1376.160071905" observedRunningTime="2026-02-16 13:54:48.841787875 +0000 UTC m=+1377.906118586" watchObservedRunningTime="2026-02-16 13:54:48.890988415 +0000 UTC m=+1377.955319116" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.060522 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-public-tls-certs\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.060689 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-internal-tls-certs\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.060803 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pzll\" (UniqueName: \"kubernetes.io/projected/1ad498b5-0999-4dc6-984f-154bd501f036-kube-api-access-4pzll\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.061322 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-combined-ca-bundle\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.061374 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-config\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.061572 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-httpd-config\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.061663 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-ovndb-tls-certs\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.207359 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-combined-ca-bundle\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.219962 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-config\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.220021 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-httpd-config\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.220110 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-ovndb-tls-certs\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.220291 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-public-tls-certs\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.223224 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-internal-tls-certs\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.223620 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pzll\" (UniqueName: \"kubernetes.io/projected/1ad498b5-0999-4dc6-984f-154bd501f036-kube-api-access-4pzll\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.229328 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-public-tls-certs\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.230039 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-ovndb-tls-certs\") pod 
\"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.230613 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-combined-ca-bundle\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.231477 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-httpd-config\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.231685 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-config\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.258051 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-internal-tls-certs\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.270394 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pzll\" (UniqueName: \"kubernetes.io/projected/1ad498b5-0999-4dc6-984f-154bd501f036-kube-api-access-4pzll\") pod \"neutron-869988d995-2jcsq\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 
crc kubenswrapper[4812]: I0216 13:54:49.498193 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.631596 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.655952 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.741086 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-dns-swift-storage-0\") pod \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.741592 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-ovsdbserver-sb\") pod \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.741719 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpxt2\" (UniqueName: \"kubernetes.io/projected/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-kube-api-access-mpxt2\") pod \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.741828 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-ovsdbserver-nb\") pod \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " Feb 16 13:54:49 crc kubenswrapper[4812]: 
I0216 13:54:49.741924 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-dns-svc\") pod \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.742094 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-config\") pod \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\" (UID: \"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c\") " Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.755398 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-kube-api-access-mpxt2" (OuterVolumeSpecName: "kube-api-access-mpxt2") pod "0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c" (UID: "0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c"). InnerVolumeSpecName "kube-api-access-mpxt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.821477 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c" (UID: "0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.822955 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-config" (OuterVolumeSpecName: "config") pod "0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c" (UID: "0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.826112 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c" (UID: "0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.838741 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c" (UID: "0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.839194 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c" (UID: "0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.845287 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpxt2\" (UniqueName: \"kubernetes.io/projected/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-kube-api-access-mpxt2\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.845331 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.845343 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.845353 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.845363 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.845374 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.872471 4812 generic.go:334] "Generic (PLEG): container finished" podID="1492db35-d6ea-4d34-b29a-6d5537694379" containerID="5a37034d0ae91cb0324bc6faf92a48d37b17b881d4cb75fd178bd25d180e1fc8" exitCode=0 Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.872619 4812 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" event={"ID":"1492db35-d6ea-4d34-b29a-6d5537694379","Type":"ContainerDied","Data":"5a37034d0ae91cb0324bc6faf92a48d37b17b881d4cb75fd178bd25d180e1fc8"} Feb 16 13:54:49 crc kubenswrapper[4812]: I0216 13:54:49.872674 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" event={"ID":"1492db35-d6ea-4d34-b29a-6d5537694379","Type":"ContainerStarted","Data":"a1c57c40f93cbd35027dc114e79003f8b83eac8329e5da689399e995b21d7e9c"} Feb 16 13:54:50 crc kubenswrapper[4812]: I0216 13:54:50.102855 4812 generic.go:334] "Generic (PLEG): container finished" podID="0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c" containerID="b9e88774feda7d7d9002bbb60236cba3d2f06f33c93cc0873a71fc16bb730d8b" exitCode=0 Feb 16 13:54:50 crc kubenswrapper[4812]: I0216 13:54:50.102930 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" event={"ID":"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c","Type":"ContainerDied","Data":"327e66feeca69f94dd0764e0c6065c0f0be7b72499686cc507b19829ad2d7b81"} Feb 16 13:54:50 crc kubenswrapper[4812]: I0216 13:54:50.102964 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" event={"ID":"0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c","Type":"ContainerDied","Data":"b9e88774feda7d7d9002bbb60236cba3d2f06f33c93cc0873a71fc16bb730d8b"} Feb 16 13:54:50 crc kubenswrapper[4812]: I0216 13:54:50.102984 4812 scope.go:117] "RemoveContainer" containerID="b9e88774feda7d7d9002bbb60236cba3d2f06f33c93cc0873a71fc16bb730d8b" Feb 16 13:54:50 crc kubenswrapper[4812]: I0216 13:54:50.103144 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-c4st6" Feb 16 13:54:50 crc kubenswrapper[4812]: I0216 13:54:50.151541 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2ac91dc-9185-46fe-9583-3355cb2be045","Type":"ContainerStarted","Data":"7ea0ad73137a07229d805e2f50fd66ed9261695bf7c16a0436f1f62d65afaba9"} Feb 16 13:54:50 crc kubenswrapper[4812]: I0216 13:54:50.171033 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86c4db556-7x7cc" event={"ID":"735893be-02d4-49a0-af55-787ea0f940cb","Type":"ContainerStarted","Data":"6b8ce315f7c192cde51e62a7726d33382abf2bfe0aa63c81508e58d9af332537"} Feb 16 13:54:50 crc kubenswrapper[4812]: I0216 13:54:50.171207 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86c4db556-7x7cc" event={"ID":"735893be-02d4-49a0-af55-787ea0f940cb","Type":"ContainerStarted","Data":"b4468d66818aa25aa70ac49a22b8682b7d561a74c87f88a13a437d9f3245bd32"} Feb 16 13:54:50 crc kubenswrapper[4812]: I0216 13:54:50.401571 4812 scope.go:117] "RemoveContainer" containerID="b9e88774feda7d7d9002bbb60236cba3d2f06f33c93cc0873a71fc16bb730d8b" Feb 16 13:54:50 crc kubenswrapper[4812]: E0216 13:54:50.405596 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9e88774feda7d7d9002bbb60236cba3d2f06f33c93cc0873a71fc16bb730d8b\": container with ID starting with b9e88774feda7d7d9002bbb60236cba3d2f06f33c93cc0873a71fc16bb730d8b not found: ID does not exist" containerID="b9e88774feda7d7d9002bbb60236cba3d2f06f33c93cc0873a71fc16bb730d8b" Feb 16 13:54:50 crc kubenswrapper[4812]: I0216 13:54:50.405649 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9e88774feda7d7d9002bbb60236cba3d2f06f33c93cc0873a71fc16bb730d8b"} err="failed to get container status \"b9e88774feda7d7d9002bbb60236cba3d2f06f33c93cc0873a71fc16bb730d8b\": rpc error: 
code = NotFound desc = could not find container \"b9e88774feda7d7d9002bbb60236cba3d2f06f33c93cc0873a71fc16bb730d8b\": container with ID starting with b9e88774feda7d7d9002bbb60236cba3d2f06f33c93cc0873a71fc16bb730d8b not found: ID does not exist" Feb 16 13:54:50 crc kubenswrapper[4812]: I0216 13:54:50.589025 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-c4st6"] Feb 16 13:54:50 crc kubenswrapper[4812]: I0216 13:54:50.606978 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-c4st6"] Feb 16 13:54:50 crc kubenswrapper[4812]: I0216 13:54:50.769233 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-869988d995-2jcsq"] Feb 16 13:54:51 crc kubenswrapper[4812]: I0216 13:54:51.718972 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d0e0e26-2608-436e-847d-d4bee61b1d85","Type":"ContainerStarted","Data":"a4b8033b28140c65c05eb6312b41cbd9e352de0d60ecf68e24e8620e6ba4c6a9"} Feb 16 13:54:51 crc kubenswrapper[4812]: I0216 13:54:51.777143 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" event={"ID":"1492db35-d6ea-4d34-b29a-6d5537694379","Type":"ContainerStarted","Data":"af14d5bffce96d7b25e41a622cb10a5e6fc0c537475ff4731e5e80f15fc12bd1"} Feb 16 13:54:51 crc kubenswrapper[4812]: I0216 13:54:51.784407 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:54:51 crc kubenswrapper[4812]: I0216 13:54:51.870772 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" podStartSLOduration=7.870732637 podStartE2EDuration="7.870732637s" podCreationTimestamp="2026-02-16 13:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:54:51.829970602 
+0000 UTC m=+1380.894301323" watchObservedRunningTime="2026-02-16 13:54:51.870732637 +0000 UTC m=+1380.935063338" Feb 16 13:54:51 crc kubenswrapper[4812]: I0216 13:54:51.924107 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c" path="/var/lib/kubelet/pods/0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c/volumes" Feb 16 13:54:52 crc kubenswrapper[4812]: I0216 13:54:52.874631 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2ac91dc-9185-46fe-9583-3355cb2be045","Type":"ContainerStarted","Data":"c0ab0ed613811f60e30fe0f01333b738d8807375c05894ada81d91826028ce30"} Feb 16 13:54:52 crc kubenswrapper[4812]: I0216 13:54:52.912493 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-869988d995-2jcsq" event={"ID":"1ad498b5-0999-4dc6-984f-154bd501f036","Type":"ContainerStarted","Data":"b5194a351de0a3c2a69daa51d8f4faa9ed51ce45912a021a5e907e911f3ece08"} Feb 16 13:54:52 crc kubenswrapper[4812]: I0216 13:54:52.913231 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-869988d995-2jcsq" event={"ID":"1ad498b5-0999-4dc6-984f-154bd501f036","Type":"ContainerStarted","Data":"76f4a3a982cfc68ea557c336c02507ac0c1c08b38c17d8a0aa2b9239a8ee758b"} Feb 16 13:54:52 crc kubenswrapper[4812]: I0216 13:54:52.941972 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86c4db556-7x7cc" event={"ID":"735893be-02d4-49a0-af55-787ea0f940cb","Type":"ContainerStarted","Data":"f714ab7e99824f80a0244828f5d93b6625f0548c7fe3b9e53c455da66a0a13c9"} Feb 16 13:54:52 crc kubenswrapper[4812]: I0216 13:54:52.942110 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:54:53 crc kubenswrapper[4812]: I0216 13:54:53.005407 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-86c4db556-7x7cc" podStartSLOduration=8.005368607 
podStartE2EDuration="8.005368607s" podCreationTimestamp="2026-02-16 13:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:54:52.98380113 +0000 UTC m=+1382.048131841" watchObservedRunningTime="2026-02-16 13:54:53.005368607 +0000 UTC m=+1382.069699308" Feb 16 13:54:53 crc kubenswrapper[4812]: E0216 13:54:53.892090 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:54:54 crc kubenswrapper[4812]: I0216 13:54:54.053491 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-869988d995-2jcsq" event={"ID":"1ad498b5-0999-4dc6-984f-154bd501f036","Type":"ContainerStarted","Data":"20c4b07745e3cd34844459e63d865503acf1c346a1e17adeaf4dfba5b05a6b3c"} Feb 16 13:54:54 crc kubenswrapper[4812]: I0216 13:54:54.055793 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:54:54 crc kubenswrapper[4812]: I0216 13:54:54.078601 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7d0e0e26-2608-436e-847d-d4bee61b1d85" containerName="glance-log" containerID="cri-o://a4b8033b28140c65c05eb6312b41cbd9e352de0d60ecf68e24e8620e6ba4c6a9" gracePeriod=30 Feb 16 13:54:54 crc kubenswrapper[4812]: I0216 13:54:54.078846 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7d0e0e26-2608-436e-847d-d4bee61b1d85" containerName="glance-httpd" containerID="cri-o://dbc94bc36e10a7e5575ff4b9e2e37970b15f5c898051aba63bde1fc8308197df" gracePeriod=30 Feb 16 13:54:54 crc 
kubenswrapper[4812]: I0216 13:54:54.079276 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d0e0e26-2608-436e-847d-d4bee61b1d85","Type":"ContainerStarted","Data":"dbc94bc36e10a7e5575ff4b9e2e37970b15f5c898051aba63bde1fc8308197df"} Feb 16 13:54:54 crc kubenswrapper[4812]: I0216 13:54:54.120018 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-869988d995-2jcsq" podStartSLOduration=6.119966195 podStartE2EDuration="6.119966195s" podCreationTimestamp="2026-02-16 13:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:54:54.104484185 +0000 UTC m=+1383.168814896" watchObservedRunningTime="2026-02-16 13:54:54.119966195 +0000 UTC m=+1383.184296896" Feb 16 13:54:54 crc kubenswrapper[4812]: I0216 13:54:54.152608 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=39.152573353 podStartE2EDuration="39.152573353s" podCreationTimestamp="2026-02-16 13:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:54:54.139125802 +0000 UTC m=+1383.203456513" watchObservedRunningTime="2026-02-16 13:54:54.152573353 +0000 UTC m=+1383.216904054" Feb 16 13:54:54 crc kubenswrapper[4812]: I0216 13:54:54.665546 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 13:54:54 crc kubenswrapper[4812]: I0216 13:54:54.666645 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerName="prometheus" containerID="cri-o://b3c8d79bb1d51b82d94578928b328f7fde590b268cc014d9eda7fcd30ce8654f" gracePeriod=600 Feb 16 13:54:54 crc kubenswrapper[4812]: 
I0216 13:54:54.667273 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerName="config-reloader" containerID="cri-o://c2421c6935d64b00dae4f5f2e2ad4de12d675fde6a818677e5b84fa6f212904b" gracePeriod=600 Feb 16 13:54:54 crc kubenswrapper[4812]: I0216 13:54:54.667346 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerName="thanos-sidecar" containerID="cri-o://beee11999f6adab1ee476e781ae4b7dc4146ad26457b4cf73d6bdf2adf0069fe" gracePeriod=600 Feb 16 13:54:54 crc kubenswrapper[4812]: E0216 13:54:54.931535 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode02a9868_e12c_4a65_9ba5_4a5965131b5b.slice/crio-beee11999f6adab1ee476e781ae4b7dc4146ad26457b4cf73d6bdf2adf0069fe.scope\": RecentStats: unable to find data in memory cache]" Feb 16 13:54:55 crc kubenswrapper[4812]: I0216 13:54:55.097292 4812 generic.go:334] "Generic (PLEG): container finished" podID="7d0e0e26-2608-436e-847d-d4bee61b1d85" containerID="a4b8033b28140c65c05eb6312b41cbd9e352de0d60ecf68e24e8620e6ba4c6a9" exitCode=143 Feb 16 13:54:55 crc kubenswrapper[4812]: I0216 13:54:55.099318 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d0e0e26-2608-436e-847d-d4bee61b1d85","Type":"ContainerDied","Data":"a4b8033b28140c65c05eb6312b41cbd9e352de0d60ecf68e24e8620e6ba4c6a9"} Feb 16 13:54:55 crc kubenswrapper[4812]: I0216 13:54:55.232887 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerName="prometheus" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 13:54:56 crc 
kubenswrapper[4812]: I0216 13:54:56.135337 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2ac91dc-9185-46fe-9583-3355cb2be045","Type":"ContainerStarted","Data":"d0b552259f2f9e2a0aaa44f8d9548d65856b76705abfe6ec9e4fc1b7b1aa744d"} Feb 16 13:54:56 crc kubenswrapper[4812]: I0216 13:54:56.181264 4812 generic.go:334] "Generic (PLEG): container finished" podID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerID="beee11999f6adab1ee476e781ae4b7dc4146ad26457b4cf73d6bdf2adf0069fe" exitCode=0 Feb 16 13:54:56 crc kubenswrapper[4812]: I0216 13:54:56.181323 4812 generic.go:334] "Generic (PLEG): container finished" podID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerID="c2421c6935d64b00dae4f5f2e2ad4de12d675fde6a818677e5b84fa6f212904b" exitCode=0 Feb 16 13:54:56 crc kubenswrapper[4812]: I0216 13:54:56.181335 4812 generic.go:334] "Generic (PLEG): container finished" podID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerID="b3c8d79bb1d51b82d94578928b328f7fde590b268cc014d9eda7fcd30ce8654f" exitCode=0 Feb 16 13:54:56 crc kubenswrapper[4812]: I0216 13:54:56.181410 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e02a9868-e12c-4a65-9ba5-4a5965131b5b","Type":"ContainerDied","Data":"beee11999f6adab1ee476e781ae4b7dc4146ad26457b4cf73d6bdf2adf0069fe"} Feb 16 13:54:56 crc kubenswrapper[4812]: I0216 13:54:56.181488 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e02a9868-e12c-4a65-9ba5-4a5965131b5b","Type":"ContainerDied","Data":"c2421c6935d64b00dae4f5f2e2ad4de12d675fde6a818677e5b84fa6f212904b"} Feb 16 13:54:56 crc kubenswrapper[4812]: I0216 13:54:56.181509 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e02a9868-e12c-4a65-9ba5-4a5965131b5b","Type":"ContainerDied","Data":"b3c8d79bb1d51b82d94578928b328f7fde590b268cc014d9eda7fcd30ce8654f"} 
Feb 16 13:54:56 crc kubenswrapper[4812]: I0216 13:54:56.191803 4812 generic.go:334] "Generic (PLEG): container finished" podID="7d0e0e26-2608-436e-847d-d4bee61b1d85" containerID="dbc94bc36e10a7e5575ff4b9e2e37970b15f5c898051aba63bde1fc8308197df" exitCode=0 Feb 16 13:54:56 crc kubenswrapper[4812]: I0216 13:54:56.191877 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d0e0e26-2608-436e-847d-d4bee61b1d85","Type":"ContainerDied","Data":"dbc94bc36e10a7e5575ff4b9e2e37970b15f5c898051aba63bde1fc8308197df"} Feb 16 13:54:57 crc kubenswrapper[4812]: I0216 13:54:57.225862 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e2ac91dc-9185-46fe-9583-3355cb2be045" containerName="glance-log" containerID="cri-o://c0ab0ed613811f60e30fe0f01333b738d8807375c05894ada81d91826028ce30" gracePeriod=30 Feb 16 13:54:57 crc kubenswrapper[4812]: I0216 13:54:57.227361 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e2ac91dc-9185-46fe-9583-3355cb2be045" containerName="glance-httpd" containerID="cri-o://d0b552259f2f9e2a0aaa44f8d9548d65856b76705abfe6ec9e4fc1b7b1aa744d" gracePeriod=30 Feb 16 13:54:57 crc kubenswrapper[4812]: I0216 13:54:57.279128 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=43.279101991 podStartE2EDuration="43.279101991s" podCreationTimestamp="2026-02-16 13:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:54:57.278661298 +0000 UTC m=+1386.342992019" watchObservedRunningTime="2026-02-16 13:54:57.279101991 +0000 UTC m=+1386.343432692" Feb 16 13:54:58 crc kubenswrapper[4812]: I0216 13:54:58.250101 4812 generic.go:334] "Generic (PLEG): container finished" 
podID="e2ac91dc-9185-46fe-9583-3355cb2be045" containerID="d0b552259f2f9e2a0aaa44f8d9548d65856b76705abfe6ec9e4fc1b7b1aa744d" exitCode=0 Feb 16 13:54:58 crc kubenswrapper[4812]: I0216 13:54:58.250628 4812 generic.go:334] "Generic (PLEG): container finished" podID="e2ac91dc-9185-46fe-9583-3355cb2be045" containerID="c0ab0ed613811f60e30fe0f01333b738d8807375c05894ada81d91826028ce30" exitCode=143 Feb 16 13:54:58 crc kubenswrapper[4812]: I0216 13:54:58.250410 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2ac91dc-9185-46fe-9583-3355cb2be045","Type":"ContainerDied","Data":"d0b552259f2f9e2a0aaa44f8d9548d65856b76705abfe6ec9e4fc1b7b1aa744d"} Feb 16 13:54:58 crc kubenswrapper[4812]: I0216 13:54:58.250750 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2ac91dc-9185-46fe-9583-3355cb2be045","Type":"ContainerDied","Data":"c0ab0ed613811f60e30fe0f01333b738d8807375c05894ada81d91826028ce30"} Feb 16 13:54:58 crc kubenswrapper[4812]: I0216 13:54:58.255557 4812 generic.go:334] "Generic (PLEG): container finished" podID="b3e61e08-7ed1-43ed-a137-910b10e85e36" containerID="8f5f581deb7240f85d1842eb1a42809ae5c341b80f7f652267f30ad19f9e2253" exitCode=0 Feb 16 13:54:58 crc kubenswrapper[4812]: I0216 13:54:58.255629 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d76qk" event={"ID":"b3e61e08-7ed1-43ed-a137-910b10e85e36","Type":"ContainerDied","Data":"8f5f581deb7240f85d1842eb1a42809ae5c341b80f7f652267f30ad19f9e2253"} Feb 16 13:54:59 crc kubenswrapper[4812]: I0216 13:54:59.340125 4812 generic.go:334] "Generic (PLEG): container finished" podID="2b502458-ea63-4fa7-80b5-5812a46900f4" containerID="915c0bff5b0f180289e5712e4550fbca30c9ec4d16c75d57902f289f8843fe63" exitCode=0 Feb 16 13:54:59 crc kubenswrapper[4812]: I0216 13:54:59.340240 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hcgfc" 
event={"ID":"2b502458-ea63-4fa7-80b5-5812a46900f4","Type":"ContainerDied","Data":"915c0bff5b0f180289e5712e4550fbca30c9ec4d16c75d57902f289f8843fe63"} Feb 16 13:55:00 crc kubenswrapper[4812]: I0216 13:55:00.455666 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:55:00 crc kubenswrapper[4812]: I0216 13:55:00.585518 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jbrfm"] Feb 16 13:55:00 crc kubenswrapper[4812]: I0216 13:55:00.585941 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-jbrfm" podUID="1eb07864-3ace-404d-b092-271e2a57e677" containerName="dnsmasq-dns" containerID="cri-o://17efbac5d5e1ebaf817d9c9a8fe12168b35af20f11dd517db48a028d31271a3a" gracePeriod=10 Feb 16 13:55:01 crc kubenswrapper[4812]: I0216 13:55:01.483904 4812 generic.go:334] "Generic (PLEG): container finished" podID="1eb07864-3ace-404d-b092-271e2a57e677" containerID="17efbac5d5e1ebaf817d9c9a8fe12168b35af20f11dd517db48a028d31271a3a" exitCode=0 Feb 16 13:55:01 crc kubenswrapper[4812]: I0216 13:55:01.484477 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jbrfm" event={"ID":"1eb07864-3ace-404d-b092-271e2a57e677","Type":"ContainerDied","Data":"17efbac5d5e1ebaf817d9c9a8fe12168b35af20f11dd517db48a028d31271a3a"} Feb 16 13:55:01 crc kubenswrapper[4812]: I0216 13:55:01.979948 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s95g8"] Feb 16 13:55:01 crc kubenswrapper[4812]: E0216 13:55:01.981416 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c" containerName="init" Feb 16 13:55:01 crc kubenswrapper[4812]: I0216 13:55:01.981467 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c" containerName="init" Feb 16 13:55:01 crc 
kubenswrapper[4812]: I0216 13:55:01.981975 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0848f6ce-f0a3-4c2c-8fa5-9c763aacb68c" containerName="init" Feb 16 13:55:01 crc kubenswrapper[4812]: I0216 13:55:01.985409 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:02 crc kubenswrapper[4812]: I0216 13:55:02.034183 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s95g8"] Feb 16 13:55:02 crc kubenswrapper[4812]: I0216 13:55:02.053905 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4a72604-ad70-4ca7-97fc-582483d19fd1-utilities\") pod \"redhat-marketplace-s95g8\" (UID: \"b4a72604-ad70-4ca7-97fc-582483d19fd1\") " pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:02 crc kubenswrapper[4812]: I0216 13:55:02.054719 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4a72604-ad70-4ca7-97fc-582483d19fd1-catalog-content\") pod \"redhat-marketplace-s95g8\" (UID: \"b4a72604-ad70-4ca7-97fc-582483d19fd1\") " pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:02 crc kubenswrapper[4812]: I0216 13:55:02.054937 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58mxl\" (UniqueName: \"kubernetes.io/projected/b4a72604-ad70-4ca7-97fc-582483d19fd1-kube-api-access-58mxl\") pod \"redhat-marketplace-s95g8\" (UID: \"b4a72604-ad70-4ca7-97fc-582483d19fd1\") " pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:02 crc kubenswrapper[4812]: I0216 13:55:02.158184 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58mxl\" (UniqueName: 
\"kubernetes.io/projected/b4a72604-ad70-4ca7-97fc-582483d19fd1-kube-api-access-58mxl\") pod \"redhat-marketplace-s95g8\" (UID: \"b4a72604-ad70-4ca7-97fc-582483d19fd1\") " pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:02 crc kubenswrapper[4812]: I0216 13:55:02.159115 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4a72604-ad70-4ca7-97fc-582483d19fd1-utilities\") pod \"redhat-marketplace-s95g8\" (UID: \"b4a72604-ad70-4ca7-97fc-582483d19fd1\") " pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:02 crc kubenswrapper[4812]: I0216 13:55:02.159804 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4a72604-ad70-4ca7-97fc-582483d19fd1-utilities\") pod \"redhat-marketplace-s95g8\" (UID: \"b4a72604-ad70-4ca7-97fc-582483d19fd1\") " pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:02 crc kubenswrapper[4812]: I0216 13:55:02.160112 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4a72604-ad70-4ca7-97fc-582483d19fd1-catalog-content\") pod \"redhat-marketplace-s95g8\" (UID: \"b4a72604-ad70-4ca7-97fc-582483d19fd1\") " pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:02 crc kubenswrapper[4812]: I0216 13:55:02.160462 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4a72604-ad70-4ca7-97fc-582483d19fd1-catalog-content\") pod \"redhat-marketplace-s95g8\" (UID: \"b4a72604-ad70-4ca7-97fc-582483d19fd1\") " pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:02 crc kubenswrapper[4812]: I0216 13:55:02.197514 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58mxl\" (UniqueName: 
\"kubernetes.io/projected/b4a72604-ad70-4ca7-97fc-582483d19fd1-kube-api-access-58mxl\") pod \"redhat-marketplace-s95g8\" (UID: \"b4a72604-ad70-4ca7-97fc-582483d19fd1\") " pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:02 crc kubenswrapper[4812]: I0216 13:55:02.345198 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:03 crc kubenswrapper[4812]: I0216 13:55:03.227180 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.115:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 13:55:03 crc kubenswrapper[4812]: I0216 13:55:03.589166 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-jbrfm" podUID="1eb07864-3ace-404d-b092-271e2a57e677" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: connect: connection refused" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.306943 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.330386 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.338732 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-d76qk" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.339915 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.421590 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-scripts\") pod \"2b502458-ea63-4fa7-80b5-5812a46900f4\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.421780 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-fernet-keys\") pod \"2b502458-ea63-4fa7-80b5-5812a46900f4\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.421827 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-thanos-prometheus-http-client-file\") pod \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.421863 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-scripts\") pod \"b3e61e08-7ed1-43ed-a137-910b10e85e36\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.421896 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-web-config\") pod \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.421926 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7d0e0e26-2608-436e-847d-d4bee61b1d85-logs\") pod \"7d0e0e26-2608-436e-847d-d4bee61b1d85\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.421960 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-config\") pod \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.422034 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-config-data\") pod \"b3e61e08-7ed1-43ed-a137-910b10e85e36\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.422076 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-config-data\") pod \"2b502458-ea63-4fa7-80b5-5812a46900f4\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.422518 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-149889e2-65b8-4663-a4ac-a48e48736700\") pod \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.422570 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-combined-ca-bundle\") pod \"2b502458-ea63-4fa7-80b5-5812a46900f4\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.422639 
4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-0\") pod \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.422793 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") pod \"7d0e0e26-2608-436e-847d-d4bee61b1d85\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.422825 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-combined-ca-bundle\") pod \"7d0e0e26-2608-436e-847d-d4bee61b1d85\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.422880 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j88n5\" (UniqueName: \"kubernetes.io/projected/2b502458-ea63-4fa7-80b5-5812a46900f4-kube-api-access-j88n5\") pod \"2b502458-ea63-4fa7-80b5-5812a46900f4\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.422985 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e02a9868-e12c-4a65-9ba5-4a5965131b5b-config-out\") pod \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.423028 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkzm4\" (UniqueName: 
\"kubernetes.io/projected/e02a9868-e12c-4a65-9ba5-4a5965131b5b-kube-api-access-nkzm4\") pod \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.423080 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-combined-ca-bundle\") pod \"b3e61e08-7ed1-43ed-a137-910b10e85e36\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.423125 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-2\") pod \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.423223 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-scripts\") pod \"7d0e0e26-2608-436e-847d-d4bee61b1d85\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.423259 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-config-data\") pod \"7d0e0e26-2608-436e-847d-d4bee61b1d85\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.423322 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42pvs\" (UniqueName: \"kubernetes.io/projected/b3e61e08-7ed1-43ed-a137-910b10e85e36-kube-api-access-42pvs\") pod \"b3e61e08-7ed1-43ed-a137-910b10e85e36\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " Feb 16 
13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.423363 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-1\") pod \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.423393 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3e61e08-7ed1-43ed-a137-910b10e85e36-logs\") pod \"b3e61e08-7ed1-43ed-a137-910b10e85e36\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.423465 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59hbz\" (UniqueName: \"kubernetes.io/projected/7d0e0e26-2608-436e-847d-d4bee61b1d85-kube-api-access-59hbz\") pod \"7d0e0e26-2608-436e-847d-d4bee61b1d85\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.423492 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d0e0e26-2608-436e-847d-d4bee61b1d85-httpd-run\") pod \"7d0e0e26-2608-436e-847d-d4bee61b1d85\" (UID: \"7d0e0e26-2608-436e-847d-d4bee61b1d85\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.423528 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e02a9868-e12c-4a65-9ba5-4a5965131b5b-tls-assets\") pod \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.423566 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-credential-keys\") pod \"2b502458-ea63-4fa7-80b5-5812a46900f4\" (UID: \"2b502458-ea63-4fa7-80b5-5812a46900f4\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.430204 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3e61e08-7ed1-43ed-a137-910b10e85e36-logs" (OuterVolumeSpecName: "logs") pod "b3e61e08-7ed1-43ed-a137-910b10e85e36" (UID: "b3e61e08-7ed1-43ed-a137-910b10e85e36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.431987 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d0e0e26-2608-436e-847d-d4bee61b1d85-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7d0e0e26-2608-436e-847d-d4bee61b1d85" (UID: "7d0e0e26-2608-436e-847d-d4bee61b1d85"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.435036 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "e02a9868-e12c-4a65-9ba5-4a5965131b5b" (UID: "e02a9868-e12c-4a65-9ba5-4a5965131b5b"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.442016 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "e02a9868-e12c-4a65-9ba5-4a5965131b5b" (UID: "e02a9868-e12c-4a65-9ba5-4a5965131b5b"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.442220 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-scripts" (OuterVolumeSpecName: "scripts") pod "b3e61e08-7ed1-43ed-a137-910b10e85e36" (UID: "b3e61e08-7ed1-43ed-a137-910b10e85e36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.442405 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e02a9868-e12c-4a65-9ba5-4a5965131b5b-kube-api-access-nkzm4" (OuterVolumeSpecName: "kube-api-access-nkzm4") pod "e02a9868-e12c-4a65-9ba5-4a5965131b5b" (UID: "e02a9868-e12c-4a65-9ba5-4a5965131b5b"). InnerVolumeSpecName "kube-api-access-nkzm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.442519 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "e02a9868-e12c-4a65-9ba5-4a5965131b5b" (UID: "e02a9868-e12c-4a65-9ba5-4a5965131b5b"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.445709 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "e02a9868-e12c-4a65-9ba5-4a5965131b5b" (UID: "e02a9868-e12c-4a65-9ba5-4a5965131b5b"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.449564 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2b502458-ea63-4fa7-80b5-5812a46900f4" (UID: "2b502458-ea63-4fa7-80b5-5812a46900f4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.458753 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-scripts" (OuterVolumeSpecName: "scripts") pod "2b502458-ea63-4fa7-80b5-5812a46900f4" (UID: "2b502458-ea63-4fa7-80b5-5812a46900f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.458795 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2b502458-ea63-4fa7-80b5-5812a46900f4" (UID: "2b502458-ea63-4fa7-80b5-5812a46900f4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.459273 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d0e0e26-2608-436e-847d-d4bee61b1d85-logs" (OuterVolumeSpecName: "logs") pod "7d0e0e26-2608-436e-847d-d4bee61b1d85" (UID: "7d0e0e26-2608-436e-847d-d4bee61b1d85"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.486210 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-config" (OuterVolumeSpecName: "config") pod "e02a9868-e12c-4a65-9ba5-4a5965131b5b" (UID: "e02a9868-e12c-4a65-9ba5-4a5965131b5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.493989 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e02a9868-e12c-4a65-9ba5-4a5965131b5b-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "e02a9868-e12c-4a65-9ba5-4a5965131b5b" (UID: "e02a9868-e12c-4a65-9ba5-4a5965131b5b"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.494051 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-scripts" (OuterVolumeSpecName: "scripts") pod "7d0e0e26-2608-436e-847d-d4bee61b1d85" (UID: "7d0e0e26-2608-436e-847d-d4bee61b1d85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.494112 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3e61e08-7ed1-43ed-a137-910b10e85e36-kube-api-access-42pvs" (OuterVolumeSpecName: "kube-api-access-42pvs") pod "b3e61e08-7ed1-43ed-a137-910b10e85e36" (UID: "b3e61e08-7ed1-43ed-a137-910b10e85e36"). InnerVolumeSpecName "kube-api-access-42pvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.498007 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d0e0e26-2608-436e-847d-d4bee61b1d85-kube-api-access-59hbz" (OuterVolumeSpecName: "kube-api-access-59hbz") pod "7d0e0e26-2608-436e-847d-d4bee61b1d85" (UID: "7d0e0e26-2608-436e-847d-d4bee61b1d85"). InnerVolumeSpecName "kube-api-access-59hbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.505761 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e02a9868-e12c-4a65-9ba5-4a5965131b5b-config-out" (OuterVolumeSpecName: "config-out") pod "e02a9868-e12c-4a65-9ba5-4a5965131b5b" (UID: "e02a9868-e12c-4a65-9ba5-4a5965131b5b"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.515060 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b502458-ea63-4fa7-80b5-5812a46900f4-kube-api-access-j88n5" (OuterVolumeSpecName: "kube-api-access-j88n5") pod "2b502458-ea63-4fa7-80b5-5812a46900f4" (UID: "2b502458-ea63-4fa7-80b5-5812a46900f4"). InnerVolumeSpecName "kube-api-access-j88n5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531538 4812 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531572 4812 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531584 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531592 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d0e0e26-2608-436e-847d-d4bee61b1d85-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531603 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531613 4812 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531622 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j88n5\" (UniqueName: \"kubernetes.io/projected/2b502458-ea63-4fa7-80b5-5812a46900f4-kube-api-access-j88n5\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc 
kubenswrapper[4812]: I0216 13:55:04.531631 4812 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e02a9868-e12c-4a65-9ba5-4a5965131b5b-config-out\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531639 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkzm4\" (UniqueName: \"kubernetes.io/projected/e02a9868-e12c-4a65-9ba5-4a5965131b5b-kube-api-access-nkzm4\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531649 4812 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531657 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531666 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42pvs\" (UniqueName: \"kubernetes.io/projected/b3e61e08-7ed1-43ed-a137-910b10e85e36-kube-api-access-42pvs\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531675 4812 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e02a9868-e12c-4a65-9ba5-4a5965131b5b-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531684 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3e61e08-7ed1-43ed-a137-910b10e85e36-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531693 4812 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59hbz\" (UniqueName: \"kubernetes.io/projected/7d0e0e26-2608-436e-847d-d4bee61b1d85-kube-api-access-59hbz\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531702 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d0e0e26-2608-436e-847d-d4bee61b1d85-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531710 4812 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e02a9868-e12c-4a65-9ba5-4a5965131b5b-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531718 4812 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.531728 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.543090 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b502458-ea63-4fa7-80b5-5812a46900f4" (UID: "2b502458-ea63-4fa7-80b5-5812a46900f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.552882 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-config-data" (OuterVolumeSpecName: "config-data") pod "2b502458-ea63-4fa7-80b5-5812a46900f4" (UID: "2b502458-ea63-4fa7-80b5-5812a46900f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.563326 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-config-data" (OuterVolumeSpecName: "config-data") pod "b3e61e08-7ed1-43ed-a137-910b10e85e36" (UID: "b3e61e08-7ed1-43ed-a137-910b10e85e36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: E0216 13:55:04.572085 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-149889e2-65b8-4663-a4ac-a48e48736700 podName:e02a9868-e12c-4a65-9ba5-4a5965131b5b nodeName:}" failed. No retries permitted until 2026-02-16 13:55:05.072058153 +0000 UTC m=+1394.136388854 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "prometheus-metric-storage-db" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-149889e2-65b8-4663-a4ac-a48e48736700") pod "e02a9868-e12c-4a65-9ba5-4a5965131b5b" (UID: "e02a9868-e12c-4a65-9ba5-4a5965131b5b") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.573972 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67" (OuterVolumeSpecName: "glance") pod "7d0e0e26-2608-436e-847d-d4bee61b1d85" (UID: "7d0e0e26-2608-436e-847d-d4bee61b1d85"). InnerVolumeSpecName "pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.604931 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d0e0e26-2608-436e-847d-d4bee61b1d85" (UID: "7d0e0e26-2608-436e-847d-d4bee61b1d85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.606068 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.632840 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3e61e08-7ed1-43ed-a137-910b10e85e36" (UID: "b3e61e08-7ed1-43ed-a137-910b10e85e36"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.634355 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbnwd\" (UniqueName: \"kubernetes.io/projected/1eb07864-3ace-404d-b092-271e2a57e677-kube-api-access-fbnwd\") pod \"1eb07864-3ace-404d-b092-271e2a57e677\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.634663 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-ovsdbserver-nb\") pod \"1eb07864-3ace-404d-b092-271e2a57e677\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.634790 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-combined-ca-bundle\") pod \"b3e61e08-7ed1-43ed-a137-910b10e85e36\" (UID: \"b3e61e08-7ed1-43ed-a137-910b10e85e36\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.634986 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-dns-svc\") pod \"1eb07864-3ace-404d-b092-271e2a57e677\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.637922 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-config\") pod \"1eb07864-3ace-404d-b092-271e2a57e677\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.638229 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-ovsdbserver-sb\") pod \"1eb07864-3ace-404d-b092-271e2a57e677\" (UID: \"1eb07864-3ace-404d-b092-271e2a57e677\") " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.639724 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.639855 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.639914 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b502458-ea63-4fa7-80b5-5812a46900f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.640005 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") on node \"crc\" " Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.640077 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: W0216 13:55:04.648265 4812 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b3e61e08-7ed1-43ed-a137-910b10e85e36/volumes/kubernetes.io~secret/combined-ca-bundle Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.648306 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3e61e08-7ed1-43ed-a137-910b10e85e36" (UID: "b3e61e08-7ed1-43ed-a137-910b10e85e36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.659107 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb07864-3ace-404d-b092-271e2a57e677-kube-api-access-fbnwd" (OuterVolumeSpecName: "kube-api-access-fbnwd") pod "1eb07864-3ace-404d-b092-271e2a57e677" (UID: "1eb07864-3ace-404d-b092-271e2a57e677"). InnerVolumeSpecName "kube-api-access-fbnwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.681904 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-web-config" (OuterVolumeSpecName: "web-config") pod "e02a9868-e12c-4a65-9ba5-4a5965131b5b" (UID: "e02a9868-e12c-4a65-9ba5-4a5965131b5b"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.696836 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jbrfm" event={"ID":"1eb07864-3ace-404d-b092-271e2a57e677","Type":"ContainerDied","Data":"cfcc7d026302b9a65ebc67a6a1c166a162d33f5fbc47b1381efe1aad299b4c42"} Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.696929 4812 scope.go:117] "RemoveContainer" containerID="17efbac5d5e1ebaf817d9c9a8fe12168b35af20f11dd517db48a028d31271a3a" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.697012 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.738367 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e02a9868-e12c-4a65-9ba5-4a5965131b5b","Type":"ContainerDied","Data":"e2ad9e3dd430b14f205a07693b722611e0cbc95942123bb2b102dc8086123d7f"} Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.738600 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.743310 4812 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e02a9868-e12c-4a65-9ba5-4a5965131b5b-web-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.743364 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbnwd\" (UniqueName: \"kubernetes.io/projected/1eb07864-3ace-404d-b092-271e2a57e677-kube-api-access-fbnwd\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.743381 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e61e08-7ed1-43ed-a137-910b10e85e36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.786264 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d0e0e26-2608-436e-847d-d4bee61b1d85","Type":"ContainerDied","Data":"3669678055f51e5a5b4fc33edb09b6d0f5a635e7a5e68032f46d3a7ebdb46b85"} Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.788182 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.790894 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hcgfc" event={"ID":"2b502458-ea63-4fa7-80b5-5812a46900f4","Type":"ContainerDied","Data":"0f196114e24aac2913884fab58c865e615ce3a5e4f1b190d73fc51fd284b51e2"} Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.790963 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f196114e24aac2913884fab58c865e615ce3a5e4f1b190d73fc51fd284b51e2" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.791033 4812 scope.go:117] "RemoveContainer" containerID="a476f4e4afb1d565c800ee28bb8344326e7fd55311c92bc69dba2ffd4b724d15" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.791291 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hcgfc" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.809253 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d76qk" event={"ID":"b3e61e08-7ed1-43ed-a137-910b10e85e36","Type":"ContainerDied","Data":"8c7bfb705356ea1ff433f2fae322fc5b4bae99c61cdf8d9d3212fe10f9cee03b"} Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.809545 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c7bfb705356ea1ff433f2fae322fc5b4bae99c61cdf8d9d3212fe10f9cee03b" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.809831 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-d76qk" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.862204 4812 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.862580 4812 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67") on node "crc" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.886356 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1eb07864-3ace-404d-b092-271e2a57e677" (UID: "1eb07864-3ace-404d-b092-271e2a57e677"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.904811 4812 reconciler_common.go:293] "Volume detached for volume \"pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.907531 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-config-data" (OuterVolumeSpecName: "config-data") pod "7d0e0e26-2608-436e-847d-d4bee61b1d85" (UID: "7d0e0e26-2608-436e-847d-d4bee61b1d85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:04 crc kubenswrapper[4812]: I0216 13:55:04.952526 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1eb07864-3ace-404d-b092-271e2a57e677" (UID: "1eb07864-3ace-404d-b092-271e2a57e677"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:04.998062 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1eb07864-3ace-404d-b092-271e2a57e677" (UID: "1eb07864-3ace-404d-b092-271e2a57e677"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.011854 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.011889 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.011905 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d0e0e26-2608-436e-847d-d4bee61b1d85-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.011917 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.113639 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-149889e2-65b8-4663-a4ac-a48e48736700\") pod \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\" (UID: \"e02a9868-e12c-4a65-9ba5-4a5965131b5b\") " Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.126768 4812 
scope.go:117] "RemoveContainer" containerID="beee11999f6adab1ee476e781ae4b7dc4146ad26457b4cf73d6bdf2adf0069fe" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.145803 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-config" (OuterVolumeSpecName: "config") pod "1eb07864-3ace-404d-b092-271e2a57e677" (UID: "1eb07864-3ace-404d-b092-271e2a57e677"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.228846 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1eb07864-3ace-404d-b092-271e2a57e677-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.356720 4812 scope.go:117] "RemoveContainer" containerID="c2421c6935d64b00dae4f5f2e2ad4de12d675fde6a818677e5b84fa6f212904b" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.461908 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-149889e2-65b8-4663-a4ac-a48e48736700" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "e02a9868-e12c-4a65-9ba5-4a5965131b5b" (UID: "e02a9868-e12c-4a65-9ba5-4a5965131b5b"). InnerVolumeSpecName "pvc-149889e2-65b8-4663-a4ac-a48e48736700". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.465658 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-149889e2-65b8-4663-a4ac-a48e48736700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-149889e2-65b8-4663-a4ac-a48e48736700\") on node \"crc\" " Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.494004 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s95g8"] Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.547186 4812 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.547351 4812 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-149889e2-65b8-4663-a4ac-a48e48736700" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-149889e2-65b8-4663-a4ac-a48e48736700") on node "crc" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.570089 4812 reconciler_common.go:293] "Volume detached for volume \"pvc-149889e2-65b8-4663-a4ac-a48e48736700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-149889e2-65b8-4663-a4ac-a48e48736700\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.636097 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f7dcf4bcb-h6jf8"] Feb 16 13:55:05 crc kubenswrapper[4812]: E0216 13:55:05.636776 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0e0e26-2608-436e-847d-d4bee61b1d85" containerName="glance-httpd" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.636796 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0e0e26-2608-436e-847d-d4bee61b1d85" containerName="glance-httpd" Feb 16 13:55:05 crc kubenswrapper[4812]: E0216 13:55:05.636823 4812 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="2b502458-ea63-4fa7-80b5-5812a46900f4" containerName="keystone-bootstrap" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.636832 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b502458-ea63-4fa7-80b5-5812a46900f4" containerName="keystone-bootstrap" Feb 16 13:55:05 crc kubenswrapper[4812]: E0216 13:55:05.636842 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb07864-3ace-404d-b092-271e2a57e677" containerName="init" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.636854 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb07864-3ace-404d-b092-271e2a57e677" containerName="init" Feb 16 13:55:05 crc kubenswrapper[4812]: E0216 13:55:05.636872 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3e61e08-7ed1-43ed-a137-910b10e85e36" containerName="placement-db-sync" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.636879 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3e61e08-7ed1-43ed-a137-910b10e85e36" containerName="placement-db-sync" Feb 16 13:55:05 crc kubenswrapper[4812]: E0216 13:55:05.636892 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerName="init-config-reloader" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.636899 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerName="init-config-reloader" Feb 16 13:55:05 crc kubenswrapper[4812]: E0216 13:55:05.636914 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0e0e26-2608-436e-847d-d4bee61b1d85" containerName="glance-log" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.636921 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0e0e26-2608-436e-847d-d4bee61b1d85" containerName="glance-log" Feb 16 13:55:05 crc kubenswrapper[4812]: E0216 13:55:05.636939 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerName="config-reloader" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.636946 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerName="config-reloader" Feb 16 13:55:05 crc kubenswrapper[4812]: E0216 13:55:05.636958 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb07864-3ace-404d-b092-271e2a57e677" containerName="dnsmasq-dns" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.636966 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb07864-3ace-404d-b092-271e2a57e677" containerName="dnsmasq-dns" Feb 16 13:55:05 crc kubenswrapper[4812]: E0216 13:55:05.636984 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerName="prometheus" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.636994 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerName="prometheus" Feb 16 13:55:05 crc kubenswrapper[4812]: E0216 13:55:05.637005 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerName="thanos-sidecar" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.637012 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerName="thanos-sidecar" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.637192 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b502458-ea63-4fa7-80b5-5812a46900f4" containerName="keystone-bootstrap" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.637205 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3e61e08-7ed1-43ed-a137-910b10e85e36" containerName="placement-db-sync" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.637219 4812 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7d0e0e26-2608-436e-847d-d4bee61b1d85" containerName="glance-httpd" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.637229 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb07864-3ace-404d-b092-271e2a57e677" containerName="dnsmasq-dns" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.637242 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerName="config-reloader" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.637250 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerName="thanos-sidecar" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.637259 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d0e0e26-2608-436e-847d-d4bee61b1d85" containerName="glance-log" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.637270 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" containerName="prometheus" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.638157 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.643692 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.643707 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.643874 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.644070 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.644220 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.644266 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vtcqc" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.659825 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5dd887c4d-zfnsh"] Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.662055 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.667430 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.675024 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.675363 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.675555 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-pp6lr" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.676530 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.688217 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f7dcf4bcb-h6jf8"] Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.705133 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5dd887c4d-zfnsh"] Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.789565 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-combined-ca-bundle\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.789696 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7qnz\" (UniqueName: \"kubernetes.io/projected/de3c3908-5942-4fd3-ac7b-6ca838a36198-kube-api-access-b7qnz\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: 
\"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.789808 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-combined-ca-bundle\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.789928 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-scripts\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.790010 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hcmv\" (UniqueName: \"kubernetes.io/projected/ebf72004-b885-40eb-94ca-bce1652d96c1-kube-api-access-9hcmv\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.790122 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-credential-keys\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.790294 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-fernet-keys\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: 
\"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.790397 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-scripts\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.790580 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-public-tls-certs\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.790681 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-config-data\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.790847 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf72004-b885-40eb-94ca-bce1652d96c1-logs\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.791042 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-config-data\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " 
pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.791168 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-internal-tls-certs\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.791277 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-public-tls-certs\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.791384 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-internal-tls-certs\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.888936 4812 scope.go:117] "RemoveContainer" containerID="b3c8d79bb1d51b82d94578928b328f7fde590b268cc014d9eda7fcd30ce8654f" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.891684 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.893653 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-scripts\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.893736 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hcmv\" (UniqueName: \"kubernetes.io/projected/ebf72004-b885-40eb-94ca-bce1652d96c1-kube-api-access-9hcmv\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.893769 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-credential-keys\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.893846 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-fernet-keys\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.893882 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-scripts\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 
13:55:05.893986 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-public-tls-certs\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.894015 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-config-data\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.894108 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf72004-b885-40eb-94ca-bce1652d96c1-logs\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.894188 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-config-data\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.894231 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-internal-tls-certs\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.894265 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-public-tls-certs\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.894306 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-internal-tls-certs\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.894462 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-combined-ca-bundle\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.894493 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7qnz\" (UniqueName: \"kubernetes.io/projected/de3c3908-5942-4fd3-ac7b-6ca838a36198-kube-api-access-b7qnz\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.894540 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-combined-ca-bundle\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.915384 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-public-tls-certs\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.916401 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-scripts\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.919928 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf72004-b885-40eb-94ca-bce1652d96c1-logs\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.927835 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-credential-keys\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.928514 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-config-data\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.933580 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-internal-tls-certs\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " 
pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.934211 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-public-tls-certs\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.945193 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-scripts\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.965866 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-internal-tls-certs\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.967512 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7qnz\" (UniqueName: \"kubernetes.io/projected/de3c3908-5942-4fd3-ac7b-6ca838a36198-kube-api-access-b7qnz\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.968459 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-combined-ca-bundle\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.972820 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-combined-ca-bundle\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.977949 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-fernet-keys\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.986633 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3c3908-5942-4fd3-ac7b-6ca838a36198-config-data\") pod \"keystone-f7dcf4bcb-h6jf8\" (UID: \"de3c3908-5942-4fd3-ac7b-6ca838a36198\") " pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.993506 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hcmv\" (UniqueName: \"kubernetes.io/projected/ebf72004-b885-40eb-94ca-bce1652d96c1-kube-api-access-9hcmv\") pod \"placement-5dd887c4d-zfnsh\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:05 crc kubenswrapper[4812]: I0216 13:55:05.997880 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") pod \"e2ac91dc-9185-46fe-9583-3355cb2be045\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:05.998151 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-combined-ca-bundle\") pod \"e2ac91dc-9185-46fe-9583-3355cb2be045\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:05.998183 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-config-data\") pod \"e2ac91dc-9185-46fe-9583-3355cb2be045\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:05.998218 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2ac91dc-9185-46fe-9583-3355cb2be045-logs\") pod \"e2ac91dc-9185-46fe-9583-3355cb2be045\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:05.998333 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2ac91dc-9185-46fe-9583-3355cb2be045-httpd-run\") pod \"e2ac91dc-9185-46fe-9583-3355cb2be045\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:05.998378 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-scripts\") pod \"e2ac91dc-9185-46fe-9583-3355cb2be045\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.010369 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2ac91dc-9185-46fe-9583-3355cb2be045-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e2ac91dc-9185-46fe-9583-3355cb2be045" (UID: "e2ac91dc-9185-46fe-9583-3355cb2be045"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.010522 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2ac91dc-9185-46fe-9583-3355cb2be045-logs" (OuterVolumeSpecName: "logs") pod "e2ac91dc-9185-46fe-9583-3355cb2be045" (UID: "e2ac91dc-9185-46fe-9583-3355cb2be045"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.016404 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp9k9\" (UniqueName: \"kubernetes.io/projected/e2ac91dc-9185-46fe-9583-3355cb2be045-kube-api-access-pp9k9\") pod \"e2ac91dc-9185-46fe-9583-3355cb2be045\" (UID: \"e2ac91dc-9185-46fe-9583-3355cb2be045\") " Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.017349 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2ac91dc-9185-46fe-9583-3355cb2be045-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.017367 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2ac91dc-9185-46fe-9583-3355cb2be045-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.092559 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-scripts" (OuterVolumeSpecName: "scripts") pod "e2ac91dc-9185-46fe-9583-3355cb2be045" (UID: "e2ac91dc-9185-46fe-9583-3355cb2be045"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.103560 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2ac91dc-9185-46fe-9583-3355cb2be045-kube-api-access-pp9k9" (OuterVolumeSpecName: "kube-api-access-pp9k9") pod "e2ac91dc-9185-46fe-9583-3355cb2be045" (UID: "e2ac91dc-9185-46fe-9583-3355cb2be045"). InnerVolumeSpecName "kube-api-access-pp9k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.148354 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.148856 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp9k9\" (UniqueName: \"kubernetes.io/projected/e2ac91dc-9185-46fe-9583-3355cb2be045-kube-api-access-pp9k9\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:06 crc kubenswrapper[4812]: E0216 13:55:06.268553 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d0e0e26_2608_436e_847d_d4bee61b1d85.slice/crio-3669678055f51e5a5b4fc33edb09b6d0f5a635e7a5e68032f46d3a7ebdb46b85\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d0e0e26_2608_436e_847d_d4bee61b1d85.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3e61e08_7ed1_43ed_a137_910b10e85e36.slice/crio-8c7bfb705356ea1ff433f2fae322fc5b4bae99c61cdf8d9d3212fe10f9cee03b\": RecentStats: unable to find data in memory cache]" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.317089 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d" (OuterVolumeSpecName: "glance") pod "e2ac91dc-9185-46fe-9583-3355cb2be045" (UID: "e2ac91dc-9185-46fe-9583-3355cb2be045"). InnerVolumeSpecName "pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.356635 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") on node \"crc\" " Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.389212 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-config-data" (OuterVolumeSpecName: "config-data") pod "e2ac91dc-9185-46fe-9583-3355cb2be045" (UID: "e2ac91dc-9185-46fe-9583-3355cb2be045"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.400377 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2ac91dc-9185-46fe-9583-3355cb2be045" (UID: "e2ac91dc-9185-46fe-9583-3355cb2be045"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.422638 4812 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.422919 4812 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d") on node "crc" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.459872 4812 reconciler_common.go:293] "Volume detached for volume \"pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.460378 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.460400 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2ac91dc-9185-46fe-9583-3355cb2be045-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.510508 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2ac91dc-9185-46fe-9583-3355cb2be045","Type":"ContainerDied","Data":"7ea0ad73137a07229d805e2f50fd66ed9261695bf7c16a0436f1f62d65afaba9"} Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.510570 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.510599 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.510620 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:55:06 crc kubenswrapper[4812]: E0216 
13:55:06.511090 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2ac91dc-9185-46fe-9583-3355cb2be045" containerName="glance-log" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.511115 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2ac91dc-9185-46fe-9583-3355cb2be045" containerName="glance-log" Feb 16 13:55:06 crc kubenswrapper[4812]: E0216 13:55:06.511169 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2ac91dc-9185-46fe-9583-3355cb2be045" containerName="glance-httpd" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.511179 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2ac91dc-9185-46fe-9583-3355cb2be045" containerName="glance-httpd" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.511415 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2ac91dc-9185-46fe-9583-3355cb2be045" containerName="glance-httpd" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.511464 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2ac91dc-9185-46fe-9583-3355cb2be045" containerName="glance-log" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.512762 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.512792 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s95g8" event={"ID":"b4a72604-ad70-4ca7-97fc-582483d19fd1","Type":"ContainerStarted","Data":"4e95b84b6fcefd373e0f1c7648a15892ccddb4a7db1f37a6c1ce0a029896636b"} Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.512899 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.517340 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.518669 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.535803 4812 scope.go:117] "RemoveContainer" containerID="bd870744e6d645686b23ecaf761646cbeb08e898465be552377c3334631d1441" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.545362 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.581262 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.607011 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.621709 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.629921 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.638974 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.639771 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.640026 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.640038 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-m7q56" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.640773 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.641119 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.641415 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.652134 4812 scope.go:117] "RemoveContainer" containerID="dbc94bc36e10a7e5575ff4b9e2e37970b15f5c898051aba63bde1fc8308197df" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.653356 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.654953 4812 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.665628 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.667155 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.667271 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.667716 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cttcb\" (UniqueName: \"kubernetes.io/projected/9999e426-9507-4791-8468-ea110c308f85-kube-api-access-cttcb\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.667823 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.668000 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9999e426-9507-4791-8468-ea110c308f85-logs\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.668977 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.669156 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9999e426-9507-4791-8468-ea110c308f85-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.751301 4812 scope.go:117] "RemoveContainer" containerID="a4b8033b28140c65c05eb6312b41cbd9e352de0d60ecf68e24e8620e6ba4c6a9" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.777545 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.777631 
4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e3116255-f9dd-4ce3-bf47-779d963bbb98-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.777854 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.777901 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9999e426-9507-4791-8468-ea110c308f85-logs\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.777958 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww8rk\" (UniqueName: \"kubernetes.io/projected/e3116255-f9dd-4ce3-bf47-779d963bbb98-kube-api-access-ww8rk\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.778000 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e3116255-f9dd-4ce3-bf47-779d963bbb98-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " 
pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.778049 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.778119 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.778172 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9999e426-9507-4791-8468-ea110c308f85-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.778198 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.778313 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/e3116255-f9dd-4ce3-bf47-779d963bbb98-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.778404 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-149889e2-65b8-4663-a4ac-a48e48736700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-149889e2-65b8-4663-a4ac-a48e48736700\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.778467 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.778552 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.778585 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-config\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.778654 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e3116255-f9dd-4ce3-bf47-779d963bbb98-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.778733 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e3116255-f9dd-4ce3-bf47-779d963bbb98-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.778802 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.778874 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.778978 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.779058 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cttcb\" (UniqueName: \"kubernetes.io/projected/9999e426-9507-4791-8468-ea110c308f85-kube-api-access-cttcb\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.781657 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9999e426-9507-4791-8468-ea110c308f85-logs\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.782043 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9999e426-9507-4791-8468-ea110c308f85-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.793403 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.793482 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6691502de4876dbd0d40188b23458c72f9080870e675ce533942e270fddd7230/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.799726 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.800299 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.800703 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.807586 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.813931 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.816872 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cttcb\" (UniqueName: \"kubernetes.io/projected/9999e426-9507-4791-8468-ea110c308f85-kube-api-access-cttcb\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.836763 4812 scope.go:117] "RemoveContainer" containerID="d0b552259f2f9e2a0aaa44f8d9548d65856b76705abfe6ec9e4fc1b7b1aa744d" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.870960 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") pod \"glance-default-internal-api-0\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.882516 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-149889e2-65b8-4663-a4ac-a48e48736700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-149889e2-65b8-4663-a4ac-a48e48736700\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.882634 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.882761 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-config\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.882843 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e3116255-f9dd-4ce3-bf47-779d963bbb98-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.882917 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e3116255-f9dd-4ce3-bf47-779d963bbb98-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.883075 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 
13:55:06.883254 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e3116255-f9dd-4ce3-bf47-779d963bbb98-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.885059 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e3116255-f9dd-4ce3-bf47-779d963bbb98-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.890407 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e3116255-f9dd-4ce3-bf47-779d963bbb98-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.891814 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.894322 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww8rk\" (UniqueName: \"kubernetes.io/projected/e3116255-f9dd-4ce3-bf47-779d963bbb98-kube-api-access-ww8rk\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.894402 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e3116255-f9dd-4ce3-bf47-779d963bbb98-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.894465 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.894527 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.894579 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.894740 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e3116255-f9dd-4ce3-bf47-779d963bbb98-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.897493 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.906895 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e3116255-f9dd-4ce3-bf47-779d963bbb98-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.907266 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.907555 4812 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.907618 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-149889e2-65b8-4663-a4ac-a48e48736700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-149889e2-65b8-4663-a4ac-a48e48736700\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2b50b93c959874b648bd27e3349ab287881b6c268869bdab57c6de3e2a9a9419/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.907671 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.908210 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-config\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.908886 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e3116255-f9dd-4ce3-bf47-779d963bbb98-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.911543 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.912694 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e3116255-f9dd-4ce3-bf47-779d963bbb98-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.917528 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e3116255-f9dd-4ce3-bf47-779d963bbb98-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:06 crc kubenswrapper[4812]: I0216 13:55:06.920818 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.105311 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww8rk\" (UniqueName: \"kubernetes.io/projected/e3116255-f9dd-4ce3-bf47-779d963bbb98-kube-api-access-ww8rk\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:07 crc kubenswrapper[4812]: E0216 13:55:07.114471 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.114615 4812 scope.go:117] "RemoveContainer" containerID="c0ab0ed613811f60e30fe0f01333b738d8807375c05894ada81d91826028ce30" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.244437 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-149889e2-65b8-4663-a4ac-a48e48736700\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-149889e2-65b8-4663-a4ac-a48e48736700\") pod \"prometheus-metric-storage-0\" (UID: \"e3116255-f9dd-4ce3-bf47-779d963bbb98\") " pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.332597 4812 generic.go:334] "Generic (PLEG): container finished" podID="b4a72604-ad70-4ca7-97fc-582483d19fd1" containerID="12662c8bde7f09c10ee913f1cd070f8770d38c5f52d13332a248cbc0e3053bec" exitCode=0 Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.332808 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s95g8" 
event={"ID":"b4a72604-ad70-4ca7-97fc-582483d19fd1","Type":"ContainerDied","Data":"12662c8bde7f09c10ee913f1cd070f8770d38c5f52d13332a248cbc0e3053bec"} Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.365147 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-p4bgr" event={"ID":"dd76f722-eb61-4676-9456-9a9bb443ef16","Type":"ContainerStarted","Data":"8a9722f9ebba8ea6d76847ea76a8f1971a76074357b94ead45ba53cb9e0beca4"} Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.365360 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.402539 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c359d03-e59e-4b85-8599-826a340acc8f","Type":"ContainerStarted","Data":"c09fb4ab62e415bd3b7ec266d85d1ffde89c642a50f58c1ff161bf2912f3944c"} Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.417752 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.534643 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-p4bgr" podStartSLOduration=7.100409789 podStartE2EDuration="1m7.534594574s" podCreationTimestamp="2026-02-16 13:54:00 +0000 UTC" firstStartedPulling="2026-02-16 13:54:04.20329981 +0000 UTC m=+1333.267630511" lastFinishedPulling="2026-02-16 13:55:04.637484595 +0000 UTC m=+1393.701815296" observedRunningTime="2026-02-16 13:55:07.427693347 +0000 UTC m=+1396.492024048" watchObservedRunningTime="2026-02-16 13:55:07.534594574 +0000 UTC m=+1396.598925275" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.580590 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.595613 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.653099 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.783558 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.808435 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.817458 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.898302 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.963831 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-config-data\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.963944 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfrfb\" (UniqueName: \"kubernetes.io/projected/a79d4b09-3b4f-4594-bda3-f219239f9471-kube-api-access-dfrfb\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.964092 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a79d4b09-3b4f-4594-bda3-f219239f9471-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.964125 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-scripts\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.964149 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.964169 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.964213 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a79d4b09-3b4f-4594-bda3-f219239f9471-logs\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.964234 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.975286 4812 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="7d0e0e26-2608-436e-847d-d4bee61b1d85" path="/var/lib/kubelet/pods/7d0e0e26-2608-436e-847d-d4bee61b1d85/volumes" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.977735 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e02a9868-e12c-4a65-9ba5-4a5965131b5b" path="/var/lib/kubelet/pods/e02a9868-e12c-4a65-9ba5-4a5965131b5b/volumes" Feb 16 13:55:07 crc kubenswrapper[4812]: I0216 13:55:07.979922 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2ac91dc-9185-46fe-9583-3355cb2be045" path="/var/lib/kubelet/pods/e2ac91dc-9185-46fe-9583-3355cb2be045/volumes" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.070503 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfrfb\" (UniqueName: \"kubernetes.io/projected/a79d4b09-3b4f-4594-bda3-f219239f9471-kube-api-access-dfrfb\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.070684 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a79d4b09-3b4f-4594-bda3-f219239f9471-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.070714 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-scripts\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.070756 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.070780 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.070823 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a79d4b09-3b4f-4594-bda3-f219239f9471-logs\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.070846 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.070942 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-config-data\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.073228 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a79d4b09-3b4f-4594-bda3-f219239f9471-logs\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.073646 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a79d4b09-3b4f-4594-bda3-f219239f9471-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.085130 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.087656 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-scripts\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.088417 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.093480 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5dd887c4d-zfnsh"] Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.132285 4812 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.132340 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4ce26190c4ae61da75993487dc8cd464b862eed00b3412abb1c020ef48a7c392/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.139479 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfrfb\" (UniqueName: \"kubernetes.io/projected/a79d4b09-3b4f-4594-bda3-f219239f9471-kube-api-access-dfrfb\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.139811 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-config-data\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.523287 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qj2kj" event={"ID":"d9d0140e-e353-40a3-8970-5007408f4cb8","Type":"ContainerStarted","Data":"672eba311a28c7448e9d6fe76a5309a2c3f2047236230c7bbc97c9cc32b8f3ec"} Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.561579 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") pod \"glance-default-external-api-0\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " pod="openstack/glance-default-external-api-0" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.576428 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f7dcf4bcb-h6jf8"] Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.598942 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dd887c4d-zfnsh" event={"ID":"ebf72004-b885-40eb-94ca-bce1652d96c1","Type":"ContainerStarted","Data":"c005464224b7a243a46cbdcefbf58f9091d217fd54cdbb6d035788c2003367ad"} Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.618820 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qj2kj" podStartSLOduration=6.795216288 podStartE2EDuration="1m8.618782228s" podCreationTimestamp="2026-02-16 13:54:00 +0000 UTC" firstStartedPulling="2026-02-16 13:54:02.821054202 +0000 UTC m=+1331.885384903" lastFinishedPulling="2026-02-16 13:55:04.644620142 +0000 UTC m=+1393.708950843" observedRunningTime="2026-02-16 13:55:08.600738034 +0000 UTC m=+1397.665068755" watchObservedRunningTime="2026-02-16 13:55:08.618782228 +0000 UTC m=+1397.683112929" Feb 16 13:55:08 crc kubenswrapper[4812]: I0216 13:55:08.642365 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:55:09 crc kubenswrapper[4812]: I0216 13:55:09.309021 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:55:09 crc kubenswrapper[4812]: I0216 13:55:09.642928 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 13:55:09 crc kubenswrapper[4812]: I0216 13:55:09.718226 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f7dcf4bcb-h6jf8" event={"ID":"de3c3908-5942-4fd3-ac7b-6ca838a36198","Type":"ContainerStarted","Data":"78f2785f47435496c4bbe4a107db2c039a433afd7bd82cb6886b8d537f965c60"} Feb 16 13:55:09 crc kubenswrapper[4812]: I0216 13:55:09.718333 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f7dcf4bcb-h6jf8" event={"ID":"de3c3908-5942-4fd3-ac7b-6ca838a36198","Type":"ContainerStarted","Data":"3a4a30a7a36b61853385c22541b9fd68333fab492692443a711bdd87633e87b2"} Feb 16 13:55:09 crc kubenswrapper[4812]: I0216 13:55:09.725006 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:09 crc kubenswrapper[4812]: I0216 13:55:09.790596 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-f7dcf4bcb-h6jf8" podStartSLOduration=4.790538926 podStartE2EDuration="4.790538926s" podCreationTimestamp="2026-02-16 13:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:09.763086078 +0000 UTC m=+1398.827416779" watchObservedRunningTime="2026-02-16 13:55:09.790538926 +0000 UTC m=+1398.854869647" Feb 16 13:55:10 crc kubenswrapper[4812]: I0216 13:55:10.166187 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dd887c4d-zfnsh" 
event={"ID":"ebf72004-b885-40eb-94ca-bce1652d96c1","Type":"ContainerStarted","Data":"f33c89b59ef51d4494f4fb0e3233676426e29b9509f7126a1839a7b0a9dbe451"} Feb 16 13:55:10 crc kubenswrapper[4812]: I0216 13:55:10.204731 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9999e426-9507-4791-8468-ea110c308f85","Type":"ContainerStarted","Data":"e4475daf6d965adc61138919c8ca058ed1b8b5b8b38823b488ebe00bdea996b1"} Feb 16 13:55:10 crc kubenswrapper[4812]: I0216 13:55:10.215111 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s95g8" event={"ID":"b4a72604-ad70-4ca7-97fc-582483d19fd1","Type":"ContainerStarted","Data":"cef25a4d5103caab9b062bab4abb0dc8020c944a78203aa562102a3bb3cc554b"} Feb 16 13:55:10 crc kubenswrapper[4812]: I0216 13:55:10.349302 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:55:10 crc kubenswrapper[4812]: W0216 13:55:10.418597 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda79d4b09_3b4f_4594_bda3_f219239f9471.slice/crio-943f2e033809e6e6acce8e0fe0f61dd7cf86892ee8f624c4bc9fabee6a00c5a8 WatchSource:0}: Error finding container 943f2e033809e6e6acce8e0fe0f61dd7cf86892ee8f624c4bc9fabee6a00c5a8: Status 404 returned error can't find the container with id 943f2e033809e6e6acce8e0fe0f61dd7cf86892ee8f624c4bc9fabee6a00c5a8 Feb 16 13:55:11 crc kubenswrapper[4812]: I0216 13:55:11.245838 4812 generic.go:334] "Generic (PLEG): container finished" podID="b4a72604-ad70-4ca7-97fc-582483d19fd1" containerID="cef25a4d5103caab9b062bab4abb0dc8020c944a78203aa562102a3bb3cc554b" exitCode=0 Feb 16 13:55:11 crc kubenswrapper[4812]: I0216 13:55:11.245909 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s95g8" 
event={"ID":"b4a72604-ad70-4ca7-97fc-582483d19fd1","Type":"ContainerDied","Data":"cef25a4d5103caab9b062bab4abb0dc8020c944a78203aa562102a3bb3cc554b"} Feb 16 13:55:11 crc kubenswrapper[4812]: I0216 13:55:11.254812 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a79d4b09-3b4f-4594-bda3-f219239f9471","Type":"ContainerStarted","Data":"943f2e033809e6e6acce8e0fe0f61dd7cf86892ee8f624c4bc9fabee6a00c5a8"} Feb 16 13:55:11 crc kubenswrapper[4812]: I0216 13:55:11.266188 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e3116255-f9dd-4ce3-bf47-779d963bbb98","Type":"ContainerStarted","Data":"39316c08a5350e10f7ba988568b98c62dc87e2ad30bb2c8b321b2ad0c7acf9c5"} Feb 16 13:55:12 crc kubenswrapper[4812]: I0216 13:55:12.311227 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dd887c4d-zfnsh" event={"ID":"ebf72004-b885-40eb-94ca-bce1652d96c1","Type":"ContainerStarted","Data":"cc93fe2a9bb702e5e3ca0d79e36bea97cb6172995231ff61d0befbabc803bd4a"} Feb 16 13:55:12 crc kubenswrapper[4812]: I0216 13:55:12.314753 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:12 crc kubenswrapper[4812]: I0216 13:55:12.314797 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:12 crc kubenswrapper[4812]: I0216 13:55:12.332535 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9999e426-9507-4791-8468-ea110c308f85","Type":"ContainerStarted","Data":"f79d27d5276aeca25764d3d7d4ca2d9a7af51e5d9acbb3ae528f540c15be7e69"} Feb 16 13:55:12 crc kubenswrapper[4812]: I0216 13:55:12.344404 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s95g8" 
event={"ID":"b4a72604-ad70-4ca7-97fc-582483d19fd1","Type":"ContainerStarted","Data":"e3a12be9d8ac6087efeac66f8f824c59971594252fd744220f51230726a15a00"} Feb 16 13:55:12 crc kubenswrapper[4812]: I0216 13:55:12.345735 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:12 crc kubenswrapper[4812]: I0216 13:55:12.345784 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:12 crc kubenswrapper[4812]: I0216 13:55:12.363492 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5dd887c4d-zfnsh" podStartSLOduration=7.363465504 podStartE2EDuration="7.363465504s" podCreationTimestamp="2026-02-16 13:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:12.345041038 +0000 UTC m=+1401.409371759" watchObservedRunningTime="2026-02-16 13:55:12.363465504 +0000 UTC m=+1401.427796205" Feb 16 13:55:12 crc kubenswrapper[4812]: I0216 13:55:12.375743 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s95g8" podStartSLOduration=7.485276447 podStartE2EDuration="11.375701799s" podCreationTimestamp="2026-02-16 13:55:01 +0000 UTC" firstStartedPulling="2026-02-16 13:55:07.357043143 +0000 UTC m=+1396.421373844" lastFinishedPulling="2026-02-16 13:55:11.247468495 +0000 UTC m=+1400.311799196" observedRunningTime="2026-02-16 13:55:12.372084094 +0000 UTC m=+1401.436414795" watchObservedRunningTime="2026-02-16 13:55:12.375701799 +0000 UTC m=+1401.440032500" Feb 16 13:55:13 crc kubenswrapper[4812]: I0216 13:55:13.358849 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"a79d4b09-3b4f-4594-bda3-f219239f9471","Type":"ContainerStarted","Data":"35213eae1d814955f16fa3ba42805f9dee1bddd82b8380c11c48e92e6ea732ef"} Feb 16 13:55:13 crc kubenswrapper[4812]: I0216 13:55:13.490544 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-s95g8" podUID="b4a72604-ad70-4ca7-97fc-582483d19fd1" containerName="registry-server" probeResult="failure" output=< Feb 16 13:55:13 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 16 13:55:13 crc kubenswrapper[4812]: > Feb 16 13:55:14 crc kubenswrapper[4812]: I0216 13:55:14.382795 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9999e426-9507-4791-8468-ea110c308f85","Type":"ContainerStarted","Data":"966639997bb5b81146597bbb4562f7c4cc69926b4d8b4eb769338b6cc89a729b"} Feb 16 13:55:14 crc kubenswrapper[4812]: I0216 13:55:14.388522 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a79d4b09-3b4f-4594-bda3-f219239f9471","Type":"ContainerStarted","Data":"2d1456daa0d1baefd2d6d972a9e3f35748a009fd90208a4d1f18d777f484dcd7"} Feb 16 13:55:14 crc kubenswrapper[4812]: I0216 13:55:14.425026 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.424993035 podStartE2EDuration="9.424993035s" podCreationTimestamp="2026-02-16 13:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:14.420955808 +0000 UTC m=+1403.485286519" watchObservedRunningTime="2026-02-16 13:55:14.424993035 +0000 UTC m=+1403.489323736" Feb 16 13:55:14 crc kubenswrapper[4812]: I0216 13:55:14.455052 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.455024958 
podStartE2EDuration="7.455024958s" podCreationTimestamp="2026-02-16 13:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:14.449140717 +0000 UTC m=+1403.513471438" watchObservedRunningTime="2026-02-16 13:55:14.455024958 +0000 UTC m=+1403.519355659" Feb 16 13:55:15 crc kubenswrapper[4812]: I0216 13:55:15.422701 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e3116255-f9dd-4ce3-bf47-779d963bbb98","Type":"ContainerStarted","Data":"d69d79311177b7da6862aee96773e69561edd5a71ea05e7465f24c06bc7f0478"} Feb 16 13:55:15 crc kubenswrapper[4812]: I0216 13:55:15.428404 4812 generic.go:334] "Generic (PLEG): container finished" podID="dd76f722-eb61-4676-9456-9a9bb443ef16" containerID="8a9722f9ebba8ea6d76847ea76a8f1971a76074357b94ead45ba53cb9e0beca4" exitCode=0 Feb 16 13:55:15 crc kubenswrapper[4812]: I0216 13:55:15.428495 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-p4bgr" event={"ID":"dd76f722-eb61-4676-9456-9a9bb443ef16","Type":"ContainerDied","Data":"8a9722f9ebba8ea6d76847ea76a8f1971a76074357b94ead45ba53cb9e0beca4"} Feb 16 13:55:15 crc kubenswrapper[4812]: I0216 13:55:15.494301 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-86c4db556-7x7cc" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.037210 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-869988d995-2jcsq"] Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.038118 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-869988d995-2jcsq" podUID="1ad498b5-0999-4dc6-984f-154bd501f036" containerName="neutron-api" containerID="cri-o://b5194a351de0a3c2a69daa51d8f4faa9ed51ce45912a021a5e907e911f3ece08" gracePeriod=30 Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.044750 4812 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-869988d995-2jcsq" podUID="1ad498b5-0999-4dc6-984f-154bd501f036" containerName="neutron-httpd" containerID="cri-o://20c4b07745e3cd34844459e63d865503acf1c346a1e17adeaf4dfba5b05a6b3c" gracePeriod=30 Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.073215 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5687b6b775-mt8dp"] Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.082404 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.121604 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5687b6b775-mt8dp"] Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.153230 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-869988d995-2jcsq" podUID="1ad498b5-0999-4dc6-984f-154bd501f036" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.170:9696/\": read tcp 10.217.0.2:57892->10.217.0.170:9696: read: connection reset by peer" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.178164 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-config\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.178226 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-ovndb-tls-certs\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 
13:55:16.178262 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-combined-ca-bundle\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.178307 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-httpd-config\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.178351 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-public-tls-certs\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.178392 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7b28\" (UniqueName: \"kubernetes.io/projected/9c203d1a-c01d-4dda-889c-4a09ea0c616c-kube-api-access-q7b28\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.178575 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-internal-tls-certs\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc 
kubenswrapper[4812]: I0216 13:55:16.281009 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-public-tls-certs\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.281128 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7b28\" (UniqueName: \"kubernetes.io/projected/9c203d1a-c01d-4dda-889c-4a09ea0c616c-kube-api-access-q7b28\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.281262 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-internal-tls-certs\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.281410 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-config\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.281465 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-ovndb-tls-certs\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.281494 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-combined-ca-bundle\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.281546 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-httpd-config\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.293880 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-combined-ca-bundle\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.295465 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-config\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.297296 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-httpd-config\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.304770 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-internal-tls-certs\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.305145 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-public-tls-certs\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.315631 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7b28\" (UniqueName: \"kubernetes.io/projected/9c203d1a-c01d-4dda-889c-4a09ea0c616c-kube-api-access-q7b28\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.329404 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c203d1a-c01d-4dda-889c-4a09ea0c616c-ovndb-tls-certs\") pod \"neutron-5687b6b775-mt8dp\" (UID: \"9c203d1a-c01d-4dda-889c-4a09ea0c616c\") " pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.431669 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.456692 4812 generic.go:334] "Generic (PLEG): container finished" podID="1ad498b5-0999-4dc6-984f-154bd501f036" containerID="20c4b07745e3cd34844459e63d865503acf1c346a1e17adeaf4dfba5b05a6b3c" exitCode=0 Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.456951 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-869988d995-2jcsq" event={"ID":"1ad498b5-0999-4dc6-984f-154bd501f036","Type":"ContainerDied","Data":"20c4b07745e3cd34844459e63d865503acf1c346a1e17adeaf4dfba5b05a6b3c"} Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.921413 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 13:55:16 crc kubenswrapper[4812]: I0216 13:55:16.921534 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 13:55:17 crc kubenswrapper[4812]: I0216 13:55:17.023194 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 13:55:17 crc kubenswrapper[4812]: I0216 13:55:17.023380 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 13:55:17 crc kubenswrapper[4812]: I0216 13:55:17.540097 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 13:55:17 crc kubenswrapper[4812]: I0216 13:55:17.543584 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 13:55:18 crc kubenswrapper[4812]: I0216 13:55:18.619064 4812 generic.go:334] "Generic (PLEG): container finished" podID="d9d0140e-e353-40a3-8970-5007408f4cb8" containerID="672eba311a28c7448e9d6fe76a5309a2c3f2047236230c7bbc97c9cc32b8f3ec" exitCode=0 Feb 16 13:55:18 crc 
kubenswrapper[4812]: I0216 13:55:18.619434 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qj2kj" event={"ID":"d9d0140e-e353-40a3-8970-5007408f4cb8","Type":"ContainerDied","Data":"672eba311a28c7448e9d6fe76a5309a2c3f2047236230c7bbc97c9cc32b8f3ec"} Feb 16 13:55:18 crc kubenswrapper[4812]: I0216 13:55:18.643746 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 13:55:18 crc kubenswrapper[4812]: I0216 13:55:18.643801 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 13:55:18 crc kubenswrapper[4812]: I0216 13:55:18.713164 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 13:55:18 crc kubenswrapper[4812]: I0216 13:55:18.714116 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 13:55:18 crc kubenswrapper[4812]: E0216 13:55:18.884474 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:55:19 crc kubenswrapper[4812]: I0216 13:55:19.501407 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-869988d995-2jcsq" podUID="1ad498b5-0999-4dc6-984f-154bd501f036" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.170:9696/\": dial tcp 10.217.0.170:9696: connect: connection refused" Feb 16 13:55:19 crc kubenswrapper[4812]: I0216 13:55:19.635369 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 13:55:19 crc kubenswrapper[4812]: I0216 13:55:19.636154 4812 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 13:55:19 crc kubenswrapper[4812]: I0216 13:55:19.636235 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 13:55:22 crc kubenswrapper[4812]: I0216 13:55:22.847182 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 13:55:22 crc kubenswrapper[4812]: I0216 13:55:22.848117 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 13:55:22 crc kubenswrapper[4812]: I0216 13:55:22.854080 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 13:55:23 crc kubenswrapper[4812]: I0216 13:55:23.413828 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-s95g8" podUID="b4a72604-ad70-4ca7-97fc-582483d19fd1" containerName="registry-server" probeResult="failure" output=< Feb 16 13:55:23 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 16 13:55:23 crc kubenswrapper[4812]: > Feb 16 13:55:24 crc kubenswrapper[4812]: I0216 13:55:24.821901 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 13:55:24 crc kubenswrapper[4812]: I0216 13:55:24.822045 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 13:55:26 crc kubenswrapper[4812]: E0216 13:55:26.669902 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 16 13:55:26 crc kubenswrapper[4812]: E0216 13:55:26.671542 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxsq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4c359d03-e59e-4b85-8599-826a340acc8f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 13:55:26 crc kubenswrapper[4812]: E0216 13:55:26.672835 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="4c359d03-e59e-4b85-8599-826a340acc8f" Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.750567 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-p4bgr" 
event={"ID":"dd76f722-eb61-4676-9456-9a9bb443ef16","Type":"ContainerDied","Data":"9f919e783f6f0439740855420f94fd3d5ca6fa05edcc9d4f510a148ad002922e"} Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.750635 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f919e783f6f0439740855420f94fd3d5ca6fa05edcc9d4f510a148ad002922e" Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.756361 4812 generic.go:334] "Generic (PLEG): container finished" podID="1ad498b5-0999-4dc6-984f-154bd501f036" containerID="b5194a351de0a3c2a69daa51d8f4faa9ed51ce45912a021a5e907e911f3ece08" exitCode=0 Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.756460 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-869988d995-2jcsq" event={"ID":"1ad498b5-0999-4dc6-984f-154bd501f036","Type":"ContainerDied","Data":"b5194a351de0a3c2a69daa51d8f4faa9ed51ce45912a021a5e907e911f3ece08"} Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.759614 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qj2kj" event={"ID":"d9d0140e-e353-40a3-8970-5007408f4cb8","Type":"ContainerDied","Data":"34ce48dc251da52807c6f00c9bf5f69700656e45e007a7fc9a517214e9b5551c"} Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.759673 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34ce48dc251da52807c6f00c9bf5f69700656e45e007a7fc9a517214e9b5551c" Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.759840 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4c359d03-e59e-4b85-8599-826a340acc8f" containerName="sg-core" containerID="cri-o://c09fb4ab62e415bd3b7ec266d85d1ffde89c642a50f58c1ff161bf2912f3944c" gracePeriod=30 Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.780159 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-p4bgr" Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.828339 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz7gm\" (UniqueName: \"kubernetes.io/projected/dd76f722-eb61-4676-9456-9a9bb443ef16-kube-api-access-qz7gm\") pod \"dd76f722-eb61-4676-9456-9a9bb443ef16\" (UID: \"dd76f722-eb61-4676-9456-9a9bb443ef16\") " Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.828417 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dd76f722-eb61-4676-9456-9a9bb443ef16-db-sync-config-data\") pod \"dd76f722-eb61-4676-9456-9a9bb443ef16\" (UID: \"dd76f722-eb61-4676-9456-9a9bb443ef16\") " Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.828622 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd76f722-eb61-4676-9456-9a9bb443ef16-combined-ca-bundle\") pod \"dd76f722-eb61-4676-9456-9a9bb443ef16\" (UID: \"dd76f722-eb61-4676-9456-9a9bb443ef16\") " Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.837132 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd76f722-eb61-4676-9456-9a9bb443ef16-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "dd76f722-eb61-4676-9456-9a9bb443ef16" (UID: "dd76f722-eb61-4676-9456-9a9bb443ef16"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.845868 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd76f722-eb61-4676-9456-9a9bb443ef16-kube-api-access-qz7gm" (OuterVolumeSpecName: "kube-api-access-qz7gm") pod "dd76f722-eb61-4676-9456-9a9bb443ef16" (UID: "dd76f722-eb61-4676-9456-9a9bb443ef16"). 
InnerVolumeSpecName "kube-api-access-qz7gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.866388 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd76f722-eb61-4676-9456-9a9bb443ef16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd76f722-eb61-4676-9456-9a9bb443ef16" (UID: "dd76f722-eb61-4676-9456-9a9bb443ef16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.933212 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd76f722-eb61-4676-9456-9a9bb443ef16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.933697 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz7gm\" (UniqueName: \"kubernetes.io/projected/dd76f722-eb61-4676-9456-9a9bb443ef16-kube-api-access-qz7gm\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.933800 4812 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dd76f722-eb61-4676-9456-9a9bb443ef16-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:26 crc kubenswrapper[4812]: I0216 13:55:26.976205 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.178196 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9d0140e-e353-40a3-8970-5007408f4cb8-etc-machine-id\") pod \"d9d0140e-e353-40a3-8970-5007408f4cb8\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.178745 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-combined-ca-bundle\") pod \"d9d0140e-e353-40a3-8970-5007408f4cb8\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.178857 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-config-data\") pod \"d9d0140e-e353-40a3-8970-5007408f4cb8\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.178899 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-db-sync-config-data\") pod \"d9d0140e-e353-40a3-8970-5007408f4cb8\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.178978 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qxxk\" (UniqueName: \"kubernetes.io/projected/d9d0140e-e353-40a3-8970-5007408f4cb8-kube-api-access-6qxxk\") pod \"d9d0140e-e353-40a3-8970-5007408f4cb8\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.179050 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-scripts\") pod \"d9d0140e-e353-40a3-8970-5007408f4cb8\" (UID: \"d9d0140e-e353-40a3-8970-5007408f4cb8\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.181632 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9d0140e-e353-40a3-8970-5007408f4cb8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d9d0140e-e353-40a3-8970-5007408f4cb8" (UID: "d9d0140e-e353-40a3-8970-5007408f4cb8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.190506 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d0140e-e353-40a3-8970-5007408f4cb8-kube-api-access-6qxxk" (OuterVolumeSpecName: "kube-api-access-6qxxk") pod "d9d0140e-e353-40a3-8970-5007408f4cb8" (UID: "d9d0140e-e353-40a3-8970-5007408f4cb8"). InnerVolumeSpecName "kube-api-access-6qxxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.198898 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d9d0140e-e353-40a3-8970-5007408f4cb8" (UID: "d9d0140e-e353-40a3-8970-5007408f4cb8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.199103 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-scripts" (OuterVolumeSpecName: "scripts") pod "d9d0140e-e353-40a3-8970-5007408f4cb8" (UID: "d9d0140e-e353-40a3-8970-5007408f4cb8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.260436 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9d0140e-e353-40a3-8970-5007408f4cb8" (UID: "d9d0140e-e353-40a3-8970-5007408f4cb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.289920 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-config-data" (OuterVolumeSpecName: "config-data") pod "d9d0140e-e353-40a3-8970-5007408f4cb8" (UID: "d9d0140e-e353-40a3-8970-5007408f4cb8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.290302 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qxxk\" (UniqueName: \"kubernetes.io/projected/d9d0140e-e353-40a3-8970-5007408f4cb8-kube-api-access-6qxxk\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.290361 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.290376 4812 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9d0140e-e353-40a3-8970-5007408f4cb8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.290391 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.290402 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.290414 4812 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9d0140e-e353-40a3-8970-5007408f4cb8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.375353 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.400027 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-public-tls-certs\") pod \"1ad498b5-0999-4dc6-984f-154bd501f036\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.400131 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-ovndb-tls-certs\") pod \"1ad498b5-0999-4dc6-984f-154bd501f036\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.400194 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-httpd-config\") pod \"1ad498b5-0999-4dc6-984f-154bd501f036\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.400537 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pzll\" (UniqueName: 
\"kubernetes.io/projected/1ad498b5-0999-4dc6-984f-154bd501f036-kube-api-access-4pzll\") pod \"1ad498b5-0999-4dc6-984f-154bd501f036\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.400626 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-internal-tls-certs\") pod \"1ad498b5-0999-4dc6-984f-154bd501f036\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.400660 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-config\") pod \"1ad498b5-0999-4dc6-984f-154bd501f036\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.400788 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-combined-ca-bundle\") pod \"1ad498b5-0999-4dc6-984f-154bd501f036\" (UID: \"1ad498b5-0999-4dc6-984f-154bd501f036\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.429226 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ad498b5-0999-4dc6-984f-154bd501f036-kube-api-access-4pzll" (OuterVolumeSpecName: "kube-api-access-4pzll") pod "1ad498b5-0999-4dc6-984f-154bd501f036" (UID: "1ad498b5-0999-4dc6-984f-154bd501f036"). InnerVolumeSpecName "kube-api-access-4pzll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.463833 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1ad498b5-0999-4dc6-984f-154bd501f036" (UID: "1ad498b5-0999-4dc6-984f-154bd501f036"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.502530 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-config" (OuterVolumeSpecName: "config") pod "1ad498b5-0999-4dc6-984f-154bd501f036" (UID: "1ad498b5-0999-4dc6-984f-154bd501f036"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.507730 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pzll\" (UniqueName: \"kubernetes.io/projected/1ad498b5-0999-4dc6-984f-154bd501f036-kube-api-access-4pzll\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.507780 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.507795 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.552114 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1ad498b5-0999-4dc6-984f-154bd501f036" (UID: 
"1ad498b5-0999-4dc6-984f-154bd501f036"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.576712 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5687b6b775-mt8dp"] Feb 16 13:55:27 crc kubenswrapper[4812]: W0216 13:55:27.583392 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c203d1a_c01d_4dda_889c_4a09ea0c616c.slice/crio-347cb516abffec5bd8038ffe6719b9c052c0a6b145a6e3de4fc8673b84f7dc65 WatchSource:0}: Error finding container 347cb516abffec5bd8038ffe6719b9c052c0a6b145a6e3de4fc8673b84f7dc65: Status 404 returned error can't find the container with id 347cb516abffec5bd8038ffe6719b9c052c0a6b145a6e3de4fc8673b84f7dc65 Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.586037 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.594063 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1ad498b5-0999-4dc6-984f-154bd501f036" (UID: "1ad498b5-0999-4dc6-984f-154bd501f036"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.595068 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ad498b5-0999-4dc6-984f-154bd501f036" (UID: "1ad498b5-0999-4dc6-984f-154bd501f036"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.610663 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c359d03-e59e-4b85-8599-826a340acc8f-log-httpd\") pod \"4c359d03-e59e-4b85-8599-826a340acc8f\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.610813 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c359d03-e59e-4b85-8599-826a340acc8f-run-httpd\") pod \"4c359d03-e59e-4b85-8599-826a340acc8f\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.610858 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-sg-core-conf-yaml\") pod \"4c359d03-e59e-4b85-8599-826a340acc8f\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.610897 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxsq7\" (UniqueName: \"kubernetes.io/projected/4c359d03-e59e-4b85-8599-826a340acc8f-kube-api-access-fxsq7\") pod \"4c359d03-e59e-4b85-8599-826a340acc8f\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.610936 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-combined-ca-bundle\") pod \"4c359d03-e59e-4b85-8599-826a340acc8f\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.611004 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-config-data\") pod \"4c359d03-e59e-4b85-8599-826a340acc8f\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.611146 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-scripts\") pod \"4c359d03-e59e-4b85-8599-826a340acc8f\" (UID: \"4c359d03-e59e-4b85-8599-826a340acc8f\") " Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.611958 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.611983 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.611993 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.612925 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c359d03-e59e-4b85-8599-826a340acc8f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4c359d03-e59e-4b85-8599-826a340acc8f" (UID: "4c359d03-e59e-4b85-8599-826a340acc8f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.613962 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c359d03-e59e-4b85-8599-826a340acc8f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4c359d03-e59e-4b85-8599-826a340acc8f" (UID: "4c359d03-e59e-4b85-8599-826a340acc8f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.616863 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-scripts" (OuterVolumeSpecName: "scripts") pod "4c359d03-e59e-4b85-8599-826a340acc8f" (UID: "4c359d03-e59e-4b85-8599-826a340acc8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.622105 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-config-data" (OuterVolumeSpecName: "config-data") pod "4c359d03-e59e-4b85-8599-826a340acc8f" (UID: "4c359d03-e59e-4b85-8599-826a340acc8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.626265 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c359d03-e59e-4b85-8599-826a340acc8f" (UID: "4c359d03-e59e-4b85-8599-826a340acc8f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.626380 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1ad498b5-0999-4dc6-984f-154bd501f036" (UID: "1ad498b5-0999-4dc6-984f-154bd501f036"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.626688 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c359d03-e59e-4b85-8599-826a340acc8f-kube-api-access-fxsq7" (OuterVolumeSpecName: "kube-api-access-fxsq7") pod "4c359d03-e59e-4b85-8599-826a340acc8f" (UID: "4c359d03-e59e-4b85-8599-826a340acc8f"). InnerVolumeSpecName "kube-api-access-fxsq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.653000 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4c359d03-e59e-4b85-8599-826a340acc8f" (UID: "4c359d03-e59e-4b85-8599-826a340acc8f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.715708 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c359d03-e59e-4b85-8599-826a340acc8f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.715769 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c359d03-e59e-4b85-8599-826a340acc8f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.715779 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.715795 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxsq7\" (UniqueName: \"kubernetes.io/projected/4c359d03-e59e-4b85-8599-826a340acc8f-kube-api-access-fxsq7\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.715806 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.715814 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.715824 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c359d03-e59e-4b85-8599-826a340acc8f-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.715836 4812 reconciler_common.go:293] 
"Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ad498b5-0999-4dc6-984f-154bd501f036-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.775200 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5687b6b775-mt8dp" event={"ID":"9c203d1a-c01d-4dda-889c-4a09ea0c616c","Type":"ContainerStarted","Data":"347cb516abffec5bd8038ffe6719b9c052c0a6b145a6e3de4fc8673b84f7dc65"} Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.779393 4812 generic.go:334] "Generic (PLEG): container finished" podID="4c359d03-e59e-4b85-8599-826a340acc8f" containerID="c09fb4ab62e415bd3b7ec266d85d1ffde89c642a50f58c1ff161bf2912f3944c" exitCode=2 Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.779630 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.780774 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c359d03-e59e-4b85-8599-826a340acc8f","Type":"ContainerDied","Data":"c09fb4ab62e415bd3b7ec266d85d1ffde89c642a50f58c1ff161bf2912f3944c"} Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.780947 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c359d03-e59e-4b85-8599-826a340acc8f","Type":"ContainerDied","Data":"121692d1b15b82347e325fdf9228416664ab4b0645aa2e9f78f09d7111889a80"} Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.780983 4812 scope.go:117] "RemoveContainer" containerID="c09fb4ab62e415bd3b7ec266d85d1ffde89c642a50f58c1ff161bf2912f3944c" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.786128 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-869988d995-2jcsq" event={"ID":"1ad498b5-0999-4dc6-984f-154bd501f036","Type":"ContainerDied","Data":"76f4a3a982cfc68ea557c336c02507ac0c1c08b38c17d8a0aa2b9239a8ee758b"} Feb 16 
13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.786247 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qj2kj" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.786248 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-869988d995-2jcsq" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.786402 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-p4bgr" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.820230 4812 scope.go:117] "RemoveContainer" containerID="c09fb4ab62e415bd3b7ec266d85d1ffde89c642a50f58c1ff161bf2912f3944c" Feb 16 13:55:27 crc kubenswrapper[4812]: E0216 13:55:27.824196 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c09fb4ab62e415bd3b7ec266d85d1ffde89c642a50f58c1ff161bf2912f3944c\": container with ID starting with c09fb4ab62e415bd3b7ec266d85d1ffde89c642a50f58c1ff161bf2912f3944c not found: ID does not exist" containerID="c09fb4ab62e415bd3b7ec266d85d1ffde89c642a50f58c1ff161bf2912f3944c" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.824287 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c09fb4ab62e415bd3b7ec266d85d1ffde89c642a50f58c1ff161bf2912f3944c"} err="failed to get container status \"c09fb4ab62e415bd3b7ec266d85d1ffde89c642a50f58c1ff161bf2912f3944c\": rpc error: code = NotFound desc = could not find container \"c09fb4ab62e415bd3b7ec266d85d1ffde89c642a50f58c1ff161bf2912f3944c\": container with ID starting with c09fb4ab62e415bd3b7ec266d85d1ffde89c642a50f58c1ff161bf2912f3944c not found: ID does not exist" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.824335 4812 scope.go:117] "RemoveContainer" containerID="20c4b07745e3cd34844459e63d865503acf1c346a1e17adeaf4dfba5b05a6b3c" Feb 16 13:55:27 crc 
kubenswrapper[4812]: I0216 13:55:27.883622 4812 scope.go:117] "RemoveContainer" containerID="b5194a351de0a3c2a69daa51d8f4faa9ed51ce45912a021a5e907e911f3ece08" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.917882 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.945117 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.960691 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:55:27 crc kubenswrapper[4812]: E0216 13:55:27.962144 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c359d03-e59e-4b85-8599-826a340acc8f" containerName="sg-core" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.962269 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c359d03-e59e-4b85-8599-826a340acc8f" containerName="sg-core" Feb 16 13:55:27 crc kubenswrapper[4812]: E0216 13:55:27.962350 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd76f722-eb61-4676-9456-9a9bb443ef16" containerName="barbican-db-sync" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.962407 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd76f722-eb61-4676-9456-9a9bb443ef16" containerName="barbican-db-sync" Feb 16 13:55:27 crc kubenswrapper[4812]: E0216 13:55:27.962490 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad498b5-0999-4dc6-984f-154bd501f036" containerName="neutron-api" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.962639 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ad498b5-0999-4dc6-984f-154bd501f036" containerName="neutron-api" Feb 16 13:55:27 crc kubenswrapper[4812]: E0216 13:55:27.962743 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad498b5-0999-4dc6-984f-154bd501f036" containerName="neutron-httpd" Feb 16 13:55:27 crc 
kubenswrapper[4812]: I0216 13:55:27.962801 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ad498b5-0999-4dc6-984f-154bd501f036" containerName="neutron-httpd" Feb 16 13:55:27 crc kubenswrapper[4812]: E0216 13:55:27.962881 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d0140e-e353-40a3-8970-5007408f4cb8" containerName="cinder-db-sync" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.962933 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d0140e-e353-40a3-8970-5007408f4cb8" containerName="cinder-db-sync" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.964194 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd76f722-eb61-4676-9456-9a9bb443ef16" containerName="barbican-db-sync" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.964299 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ad498b5-0999-4dc6-984f-154bd501f036" containerName="neutron-httpd" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.964370 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ad498b5-0999-4dc6-984f-154bd501f036" containerName="neutron-api" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.964428 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c359d03-e59e-4b85-8599-826a340acc8f" containerName="sg-core" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.964506 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9d0140e-e353-40a3-8970-5007408f4cb8" containerName="cinder-db-sync" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.972151 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.976358 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.978995 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.979248 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-869988d995-2jcsq"] Feb 16 13:55:27 crc kubenswrapper[4812]: I0216 13:55:27.996812 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.026179 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-869988d995-2jcsq"] Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.092223 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-run-httpd\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.092328 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-config-data\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.092377 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8rt4\" (UniqueName: \"kubernetes.io/projected/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-kube-api-access-l8rt4\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc 
kubenswrapper[4812]: I0216 13:55:28.095328 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.095516 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-log-httpd\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.095593 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-scripts\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.095690 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.217674 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.218384 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-log-httpd\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.218479 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-scripts\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.218573 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.218989 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-run-httpd\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.219025 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-config-data\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.219058 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8rt4\" (UniqueName: \"kubernetes.io/projected/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-kube-api-access-l8rt4\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 
13:55:28.219826 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-log-httpd\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.246820 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-run-httpd\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.260491 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.266417 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.270766 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.272191 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-config-data\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.273184 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-scripts\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: E0216 13:55:28.303123 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config-data kube-api-access-l8rt4 scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.358391 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-57b9fd55d-zs44x"] Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.362262 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.393546 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.393914 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.394153 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qhrnq" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.402813 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8rt4\" (UniqueName: \"kubernetes.io/projected/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-kube-api-access-l8rt4\") pod \"ceilometer-0\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.404021 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-59c64f6659-7rr8v"] Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.429795 4812 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.462971 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57b9fd55d-zs44x"] Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.470792 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.494336 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b743ee5f-7d4b-4e37-b46f-449f1c1155f9-logs\") pod \"barbican-keystone-listener-57b9fd55d-zs44x\" (UID: \"b743ee5f-7d4b-4e37-b46f-449f1c1155f9\") " pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.494476 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b743ee5f-7d4b-4e37-b46f-449f1c1155f9-combined-ca-bundle\") pod \"barbican-keystone-listener-57b9fd55d-zs44x\" (UID: \"b743ee5f-7d4b-4e37-b46f-449f1c1155f9\") " pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.525600 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvkm8\" (UniqueName: \"kubernetes.io/projected/b743ee5f-7d4b-4e37-b46f-449f1c1155f9-kube-api-access-gvkm8\") pod \"barbican-keystone-listener-57b9fd55d-zs44x\" (UID: \"b743ee5f-7d4b-4e37-b46f-449f1c1155f9\") " pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.525992 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/b743ee5f-7d4b-4e37-b46f-449f1c1155f9-config-data-custom\") pod \"barbican-keystone-listener-57b9fd55d-zs44x\" (UID: \"b743ee5f-7d4b-4e37-b46f-449f1c1155f9\") " pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.526358 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b743ee5f-7d4b-4e37-b46f-449f1c1155f9-config-data\") pod \"barbican-keystone-listener-57b9fd55d-zs44x\" (UID: \"b743ee5f-7d4b-4e37-b46f-449f1c1155f9\") " pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.546711 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-59c64f6659-7rr8v"] Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.612374 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-g98xl"] Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.617543 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.631962 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgvcs\" (UniqueName: \"kubernetes.io/projected/1e7c7a64-8967-4ee4-af38-c6d384fbd722-kube-api-access-lgvcs\") pod \"barbican-worker-59c64f6659-7rr8v\" (UID: \"1e7c7a64-8967-4ee4-af38-c6d384fbd722\") " pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.632133 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b743ee5f-7d4b-4e37-b46f-449f1c1155f9-config-data\") pod \"barbican-keystone-listener-57b9fd55d-zs44x\" (UID: \"b743ee5f-7d4b-4e37-b46f-449f1c1155f9\") " pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.632242 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7c7a64-8967-4ee4-af38-c6d384fbd722-combined-ca-bundle\") pod \"barbican-worker-59c64f6659-7rr8v\" (UID: \"1e7c7a64-8967-4ee4-af38-c6d384fbd722\") " pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.632311 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b743ee5f-7d4b-4e37-b46f-449f1c1155f9-logs\") pod \"barbican-keystone-listener-57b9fd55d-zs44x\" (UID: \"b743ee5f-7d4b-4e37-b46f-449f1c1155f9\") " pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.632335 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b743ee5f-7d4b-4e37-b46f-449f1c1155f9-combined-ca-bundle\") pod 
\"barbican-keystone-listener-57b9fd55d-zs44x\" (UID: \"b743ee5f-7d4b-4e37-b46f-449f1c1155f9\") " pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.632386 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvkm8\" (UniqueName: \"kubernetes.io/projected/b743ee5f-7d4b-4e37-b46f-449f1c1155f9-kube-api-access-gvkm8\") pod \"barbican-keystone-listener-57b9fd55d-zs44x\" (UID: \"b743ee5f-7d4b-4e37-b46f-449f1c1155f9\") " pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.632412 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e7c7a64-8967-4ee4-af38-c6d384fbd722-config-data-custom\") pod \"barbican-worker-59c64f6659-7rr8v\" (UID: \"1e7c7a64-8967-4ee4-af38-c6d384fbd722\") " pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.632498 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e7c7a64-8967-4ee4-af38-c6d384fbd722-config-data\") pod \"barbican-worker-59c64f6659-7rr8v\" (UID: \"1e7c7a64-8967-4ee4-af38-c6d384fbd722\") " pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.632526 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e7c7a64-8967-4ee4-af38-c6d384fbd722-logs\") pod \"barbican-worker-59c64f6659-7rr8v\" (UID: \"1e7c7a64-8967-4ee4-af38-c6d384fbd722\") " pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.632622 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/b743ee5f-7d4b-4e37-b46f-449f1c1155f9-config-data-custom\") pod \"barbican-keystone-listener-57b9fd55d-zs44x\" (UID: \"b743ee5f-7d4b-4e37-b46f-449f1c1155f9\") " pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.634888 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b743ee5f-7d4b-4e37-b46f-449f1c1155f9-logs\") pod \"barbican-keystone-listener-57b9fd55d-zs44x\" (UID: \"b743ee5f-7d4b-4e37-b46f-449f1c1155f9\") " pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.674975 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-g98xl"] Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.713597 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b743ee5f-7d4b-4e37-b46f-449f1c1155f9-config-data\") pod \"barbican-keystone-listener-57b9fd55d-zs44x\" (UID: \"b743ee5f-7d4b-4e37-b46f-449f1c1155f9\") " pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.714191 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b743ee5f-7d4b-4e37-b46f-449f1c1155f9-config-data-custom\") pod \"barbican-keystone-listener-57b9fd55d-zs44x\" (UID: \"b743ee5f-7d4b-4e37-b46f-449f1c1155f9\") " pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.714201 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvkm8\" (UniqueName: \"kubernetes.io/projected/b743ee5f-7d4b-4e37-b46f-449f1c1155f9-kube-api-access-gvkm8\") pod \"barbican-keystone-listener-57b9fd55d-zs44x\" (UID: \"b743ee5f-7d4b-4e37-b46f-449f1c1155f9\") " 
pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.741581 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.745539 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b743ee5f-7d4b-4e37-b46f-449f1c1155f9-combined-ca-bundle\") pod \"barbican-keystone-listener-57b9fd55d-zs44x\" (UID: \"b743ee5f-7d4b-4e37-b46f-449f1c1155f9\") " pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.753754 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-dns-svc\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.754059 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-config\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.754118 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7c7a64-8967-4ee4-af38-c6d384fbd722-combined-ca-bundle\") pod \"barbican-worker-59c64f6659-7rr8v\" (UID: \"1e7c7a64-8967-4ee4-af38-c6d384fbd722\") " pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.754146 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.754334 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e7c7a64-8967-4ee4-af38-c6d384fbd722-config-data-custom\") pod \"barbican-worker-59c64f6659-7rr8v\" (UID: \"1e7c7a64-8967-4ee4-af38-c6d384fbd722\") " pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.754466 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.754516 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e7c7a64-8967-4ee4-af38-c6d384fbd722-config-data\") pod \"barbican-worker-59c64f6659-7rr8v\" (UID: \"1e7c7a64-8967-4ee4-af38-c6d384fbd722\") " pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.754555 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e7c7a64-8967-4ee4-af38-c6d384fbd722-logs\") pod \"barbican-worker-59c64f6659-7rr8v\" (UID: \"1e7c7a64-8967-4ee4-af38-c6d384fbd722\") " pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.754608 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkdbt\" 
(UniqueName: \"kubernetes.io/projected/6f779db3-6985-40e2-ba00-85650a832066-kube-api-access-kkdbt\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.754777 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgvcs\" (UniqueName: \"kubernetes.io/projected/1e7c7a64-8967-4ee4-af38-c6d384fbd722-kube-api-access-lgvcs\") pod \"barbican-worker-59c64f6659-7rr8v\" (UID: \"1e7c7a64-8967-4ee4-af38-c6d384fbd722\") " pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.754847 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.757022 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.758928 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e7c7a64-8967-4ee4-af38-c6d384fbd722-logs\") pod \"barbican-worker-59c64f6659-7rr8v\" (UID: \"1e7c7a64-8967-4ee4-af38-c6d384fbd722\") " pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.770394 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.770781 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.770982 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.771225 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fxw8m" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.771305 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e7c7a64-8967-4ee4-af38-c6d384fbd722-config-data-custom\") pod \"barbican-worker-59c64f6659-7rr8v\" (UID: \"1e7c7a64-8967-4ee4-af38-c6d384fbd722\") " pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.785873 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e7c7a64-8967-4ee4-af38-c6d384fbd722-config-data\") pod \"barbican-worker-59c64f6659-7rr8v\" (UID: \"1e7c7a64-8967-4ee4-af38-c6d384fbd722\") " pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.791731 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-scheduler-0"] Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.796470 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7c7a64-8967-4ee4-af38-c6d384fbd722-combined-ca-bundle\") pod \"barbican-worker-59c64f6659-7rr8v\" (UID: \"1e7c7a64-8967-4ee4-af38-c6d384fbd722\") " pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.849286 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgvcs\" (UniqueName: \"kubernetes.io/projected/1e7c7a64-8967-4ee4-af38-c6d384fbd722-kube-api-access-lgvcs\") pod \"barbican-worker-59c64f6659-7rr8v\" (UID: \"1e7c7a64-8967-4ee4-af38-c6d384fbd722\") " pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.860631 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.860886 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.861252 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpz8g\" (UniqueName: \"kubernetes.io/projected/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-kube-api-access-tpz8g\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.861482 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.861627 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.861661 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.863033 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: 
\"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.864104 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkdbt\" (UniqueName: \"kubernetes.io/projected/6f779db3-6985-40e2-ba00-85650a832066-kube-api-access-kkdbt\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.872364 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-scripts\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.872680 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.872774 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-config-data\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.872889 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-dns-svc\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " 
pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.873158 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-config\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.873212 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.874720 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.881987 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-dns-svc\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.882105 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-config\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.892854 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.892969 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-g98xl"] Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.899864 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5687b6b775-mt8dp" event={"ID":"9c203d1a-c01d-4dda-889c-4a09ea0c616c","Type":"ContainerStarted","Data":"b660aed0bb386ffe603eb178ec72a9b457d14a82fc9f1bd2a6f39fc2ea54f929"} Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.899936 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5687b6b775-mt8dp" event={"ID":"9c203d1a-c01d-4dda-889c-4a09ea0c616c","Type":"ContainerStarted","Data":"4c10deb8a3d068c29135ed3c68ed4b687e4e5234ae4909f93abb5852eaabb267"} Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.901723 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:55:28 crc kubenswrapper[4812]: E0216 13:55:28.912057 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-kkdbt], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-85ff748b95-g98xl" podUID="6f779db3-6985-40e2-ba00-85650a832066" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.913981 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.917597 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-59c64f6659-7rr8v" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.921453 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkdbt\" (UniqueName: \"kubernetes.io/projected/6f779db3-6985-40e2-ba00-85650a832066-kube-api-access-kkdbt\") pod \"dnsmasq-dns-85ff748b95-g98xl\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.927796 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-775b4c67cd-9n6f8"] Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.930378 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.934303 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.976044 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.976156 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpz8g\" (UniqueName: \"kubernetes.io/projected/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-kube-api-access-tpz8g\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.976282 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-etc-machine-id\") 
pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.976318 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.976503 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-scripts\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.976582 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-config-data\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.982639 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.983661 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.989084 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cvglp"] Feb 16 13:55:28 crc kubenswrapper[4812]: I0216 13:55:28.992911 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.009388 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-775b4c67cd-9n6f8"] Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.014719 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.021470 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.022385 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-scripts\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " 
pod="openstack/cinder-scheduler-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.029600 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-config-data\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.049600 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cvglp"] Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.054140 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpz8g\" (UniqueName: \"kubernetes.io/projected/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-kube-api-access-tpz8g\") pod \"cinder-scheduler-0\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.070611 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5687b6b775-mt8dp" podStartSLOduration=13.070574935 podStartE2EDuration="13.070574935s" podCreationTimestamp="2026-02-16 13:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:28.936904139 +0000 UTC m=+1418.001234850" watchObservedRunningTime="2026-02-16 13:55:29.070574935 +0000 UTC m=+1418.134905636" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.082536 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-run-httpd\") pod \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.082767 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-sg-core-conf-yaml\") pod \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.082869 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-log-httpd\") pod \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.082942 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-config-data\") pod \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.083051 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-combined-ca-bundle\") pod \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.083087 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-scripts\") pod \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.083173 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8rt4\" (UniqueName: \"kubernetes.io/projected/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-kube-api-access-l8rt4\") pod \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\" (UID: \"3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a\") " Feb 16 13:55:29 crc 
kubenswrapper[4812]: I0216 13:55:29.083614 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.083649 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdwgv\" (UniqueName: \"kubernetes.io/projected/a81f17cc-32a6-4089-bf61-ea63d46b7f60-kube-api-access-jdwgv\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.083711 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sprc8\" (UniqueName: \"kubernetes.io/projected/938fb099-5861-4d4f-8105-bcd26cbbcabd-kube-api-access-sprc8\") pod \"barbican-api-775b4c67cd-9n6f8\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.083739 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-config-data\") pod \"barbican-api-775b4c67cd-9n6f8\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.083807 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-config-data-custom\") pod \"barbican-api-775b4c67cd-9n6f8\" (UID: 
\"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.083829 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.083867 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.083972 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-combined-ca-bundle\") pod \"barbican-api-775b4c67cd-9n6f8\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.084022 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-config\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.084041 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-dns-svc\") pod 
\"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.084067 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/938fb099-5861-4d4f-8105-bcd26cbbcabd-logs\") pod \"barbican-api-775b4c67cd-9n6f8\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.088945 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a" (UID: "3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.091220 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-kube-api-access-l8rt4" (OuterVolumeSpecName: "kube-api-access-l8rt4") pod "3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a" (UID: "3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a"). InnerVolumeSpecName "kube-api-access-l8rt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.091650 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a" (UID: "3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.091861 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a" (UID: "3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.095257 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-scripts" (OuterVolumeSpecName: "scripts") pod "3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a" (UID: "3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.095698 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-config-data" (OuterVolumeSpecName: "config-data") pod "3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a" (UID: "3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.110340 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a" (UID: "3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.160172 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.188518 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-config-data-custom\") pod \"barbican-api-775b4c67cd-9n6f8\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.188644 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.188825 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.189303 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-combined-ca-bundle\") pod \"barbican-api-775b4c67cd-9n6f8\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.189393 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-config\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.189493 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.190662 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/938fb099-5861-4d4f-8105-bcd26cbbcabd-logs\") pod \"barbican-api-775b4c67cd-9n6f8\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.190801 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.190850 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdwgv\" (UniqueName: \"kubernetes.io/projected/a81f17cc-32a6-4089-bf61-ea63d46b7f60-kube-api-access-jdwgv\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.190981 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sprc8\" (UniqueName: \"kubernetes.io/projected/938fb099-5861-4d4f-8105-bcd26cbbcabd-kube-api-access-sprc8\") pod \"barbican-api-775b4c67cd-9n6f8\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 
13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.191040 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-config-data\") pod \"barbican-api-775b4c67cd-9n6f8\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.191274 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.191305 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.191318 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.191339 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.191352 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.191367 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8rt4\" (UniqueName: \"kubernetes.io/projected/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-kube-api-access-l8rt4\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:29 crc 
kubenswrapper[4812]: I0216 13:55:29.191382 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.194886 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.195132 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/938fb099-5861-4d4f-8105-bcd26cbbcabd-logs\") pod \"barbican-api-775b4c67cd-9n6f8\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.196192 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.198987 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-config\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.200215 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.209932 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-config-data\") pod \"barbican-api-775b4c67cd-9n6f8\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.214233 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.218955 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-config-data-custom\") pod \"barbican-api-775b4c67cd-9n6f8\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.230330 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-combined-ca-bundle\") pod \"barbican-api-775b4c67cd-9n6f8\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.233377 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.238823 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.239531 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdwgv\" (UniqueName: \"kubernetes.io/projected/a81f17cc-32a6-4089-bf61-ea63d46b7f60-kube-api-access-jdwgv\") pod \"dnsmasq-dns-5c9776ccc5-cvglp\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.241293 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.250750 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sprc8\" (UniqueName: \"kubernetes.io/projected/938fb099-5861-4d4f-8105-bcd26cbbcabd-kube-api-access-sprc8\") pod \"barbican-api-775b4c67cd-9n6f8\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.272211 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.308608 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-config-data\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.311682 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbxw9\" (UniqueName: \"kubernetes.io/projected/310d4179-66e9-4979-984a-3844494fe6ab-kube-api-access-gbxw9\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.311770 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-config-data-custom\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.311902 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/310d4179-66e9-4979-984a-3844494fe6ab-logs\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.311993 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/310d4179-66e9-4979-984a-3844494fe6ab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.312486 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.312571 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-scripts\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.334322 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.379381 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.419594 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.419706 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-scripts\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.419947 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-config-data\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.420020 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbxw9\" (UniqueName: \"kubernetes.io/projected/310d4179-66e9-4979-984a-3844494fe6ab-kube-api-access-gbxw9\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc 
kubenswrapper[4812]: I0216 13:55:29.420056 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-config-data-custom\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.420139 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/310d4179-66e9-4979-984a-3844494fe6ab-logs\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.420201 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/310d4179-66e9-4979-984a-3844494fe6ab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.420787 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/310d4179-66e9-4979-984a-3844494fe6ab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.426495 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/310d4179-66e9-4979-984a-3844494fe6ab-logs\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.432736 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-scripts\") pod \"cinder-api-0\" (UID: 
\"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.433288 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.434317 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-config-data\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.439216 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-config-data-custom\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.449585 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbxw9\" (UniqueName: \"kubernetes.io/projected/310d4179-66e9-4979-984a-3844494fe6ab-kube-api-access-gbxw9\") pod \"cinder-api-0\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.571113 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.756503 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57b9fd55d-zs44x"] Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.917381 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ad498b5-0999-4dc6-984f-154bd501f036" path="/var/lib/kubelet/pods/1ad498b5-0999-4dc6-984f-154bd501f036/volumes" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.919202 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c359d03-e59e-4b85-8599-826a340acc8f" path="/var/lib/kubelet/pods/4c359d03-e59e-4b85-8599-826a340acc8f/volumes" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.948161 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.949316 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:29 crc kubenswrapper[4812]: I0216 13:55:29.949670 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" event={"ID":"b743ee5f-7d4b-4e37-b46f-449f1c1155f9","Type":"ContainerStarted","Data":"262389dba772db7e93e731ed203b582f1c9879a65f8f86cd24b59c864706a447"} Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.004795 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.048233 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-dns-swift-storage-0\") pod \"6f779db3-6985-40e2-ba00-85650a832066\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.048383 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-dns-svc\") pod \"6f779db3-6985-40e2-ba00-85650a832066\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.048421 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-ovsdbserver-sb\") pod \"6f779db3-6985-40e2-ba00-85650a832066\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.048888 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkdbt\" (UniqueName: \"kubernetes.io/projected/6f779db3-6985-40e2-ba00-85650a832066-kube-api-access-kkdbt\") pod \"6f779db3-6985-40e2-ba00-85650a832066\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.049029 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-config\") pod \"6f779db3-6985-40e2-ba00-85650a832066\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.049064 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-ovsdbserver-nb\") pod \"6f779db3-6985-40e2-ba00-85650a832066\" (UID: \"6f779db3-6985-40e2-ba00-85650a832066\") " Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.054095 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f779db3-6985-40e2-ba00-85650a832066" (UID: "6f779db3-6985-40e2-ba00-85650a832066"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.054401 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6f779db3-6985-40e2-ba00-85650a832066" (UID: "6f779db3-6985-40e2-ba00-85650a832066"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.054493 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-config" (OuterVolumeSpecName: "config") pod "6f779db3-6985-40e2-ba00-85650a832066" (UID: "6f779db3-6985-40e2-ba00-85650a832066"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.054781 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6f779db3-6985-40e2-ba00-85650a832066" (UID: "6f779db3-6985-40e2-ba00-85650a832066"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.055013 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f779db3-6985-40e2-ba00-85650a832066" (UID: "6f779db3-6985-40e2-ba00-85650a832066"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.058058 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.058111 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.058132 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.058149 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.058171 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f779db3-6985-40e2-ba00-85650a832066-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.063403 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 
13:55:30.066494 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f779db3-6985-40e2-ba00-85650a832066-kube-api-access-kkdbt" (OuterVolumeSpecName: "kube-api-access-kkdbt") pod "6f779db3-6985-40e2-ba00-85650a832066" (UID: "6f779db3-6985-40e2-ba00-85650a832066"). InnerVolumeSpecName "kube-api-access-kkdbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.113889 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.134184 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.141592 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.147243 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.160205 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.160322 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.167308 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.167614 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j25m\" (UniqueName: 
\"kubernetes.io/projected/b3ea4e67-222a-4f37-9d17-371c857ef7c4-kube-api-access-7j25m\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.168482 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-scripts\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.168971 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ea4e67-222a-4f37-9d17-371c857ef7c4-log-httpd\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.169035 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.169171 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ea4e67-222a-4f37-9d17-371c857ef7c4-run-httpd\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.169212 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-config-data\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " 
pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.169301 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkdbt\" (UniqueName: \"kubernetes.io/projected/6f779db3-6985-40e2-ba00-85650a832066-kube-api-access-kkdbt\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.170217 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-59c64f6659-7rr8v"] Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.191630 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.275485 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ea4e67-222a-4f37-9d17-371c857ef7c4-log-httpd\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.275552 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.275621 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ea4e67-222a-4f37-9d17-371c857ef7c4-run-httpd\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.275650 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-config-data\") pod \"ceilometer-0\" (UID: 
\"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.275698 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.275754 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j25m\" (UniqueName: \"kubernetes.io/projected/b3ea4e67-222a-4f37-9d17-371c857ef7c4-kube-api-access-7j25m\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.275815 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-scripts\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.276344 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ea4e67-222a-4f37-9d17-371c857ef7c4-log-httpd\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.276373 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ea4e67-222a-4f37-9d17-371c857ef7c4-run-httpd\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.290522 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.290680 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-config-data\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.292341 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.302815 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-scripts\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.304359 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j25m\" (UniqueName: \"kubernetes.io/projected/b3ea4e67-222a-4f37-9d17-371c857ef7c4-kube-api-access-7j25m\") pod \"ceilometer-0\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.359044 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cvglp"] Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.552136 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.602505 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-775b4c67cd-9n6f8"] Feb 16 13:55:30 crc kubenswrapper[4812]: I0216 13:55:30.627873 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:55:30 crc kubenswrapper[4812]: W0216 13:55:30.768889 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod310d4179_66e9_4979_984a_3844494fe6ab.slice/crio-4b6e8ec2f0fbea4be90efa68d9f98a0850607cdffb24628a3e7ba58b48a6b809 WatchSource:0}: Error finding container 4b6e8ec2f0fbea4be90efa68d9f98a0850607cdffb24628a3e7ba58b48a6b809: Status 404 returned error can't find the container with id 4b6e8ec2f0fbea4be90efa68d9f98a0850607cdffb24628a3e7ba58b48a6b809 Feb 16 13:55:31 crc kubenswrapper[4812]: I0216 13:55:31.010803 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59c64f6659-7rr8v" event={"ID":"1e7c7a64-8967-4ee4-af38-c6d384fbd722","Type":"ContainerStarted","Data":"9331b1e1a92ef778c8578146e94b4b24dc0241fe1fca04017fa831d6366a666e"} Feb 16 13:55:31 crc kubenswrapper[4812]: I0216 13:55:31.024793 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"310d4179-66e9-4979-984a-3844494fe6ab","Type":"ContainerStarted","Data":"4b6e8ec2f0fbea4be90efa68d9f98a0850607cdffb24628a3e7ba58b48a6b809"} Feb 16 13:55:31 crc kubenswrapper[4812]: I0216 13:55:31.043718 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-775b4c67cd-9n6f8" event={"ID":"938fb099-5861-4d4f-8105-bcd26cbbcabd","Type":"ContainerStarted","Data":"e2ed4eace6eda27e37954409d269577e6fcfd7e387717a956447c1d0da44e2b3"} Feb 16 13:55:31 crc kubenswrapper[4812]: I0216 13:55:31.059957 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a","Type":"ContainerStarted","Data":"b71d9e07ae9e66442e62832aaba233ffc416ff67e983dccd905b6ba0ea9a5f49"} Feb 16 13:55:31 crc kubenswrapper[4812]: I0216 13:55:31.071645 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-g98xl" Feb 16 13:55:31 crc kubenswrapper[4812]: I0216 13:55:31.082098 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" event={"ID":"a81f17cc-32a6-4089-bf61-ea63d46b7f60","Type":"ContainerStarted","Data":"efa52ebc94cf519afe3a362337a88f0439772e0a65e19e97fb76402c532703a4"} Feb 16 13:55:31 crc kubenswrapper[4812]: I0216 13:55:31.201556 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-g98xl"] Feb 16 13:55:31 crc kubenswrapper[4812]: I0216 13:55:31.306874 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-g98xl"] Feb 16 13:55:31 crc kubenswrapper[4812]: I0216 13:55:31.482867 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:55:31 crc kubenswrapper[4812]: W0216 13:55:31.526669 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3ea4e67_222a_4f37_9d17_371c857ef7c4.slice/crio-ab82d7434e0e5eafde2202594edaa832b51173c34f47c36eac8a45c2ef7ea76b WatchSource:0}: Error finding container ab82d7434e0e5eafde2202594edaa832b51173c34f47c36eac8a45c2ef7ea76b: Status 404 returned error can't find the container with id ab82d7434e0e5eafde2202594edaa832b51173c34f47c36eac8a45c2ef7ea76b Feb 16 13:55:31 crc kubenswrapper[4812]: I0216 13:55:31.929884 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a" path="/var/lib/kubelet/pods/3b0bb4ed-3db1-4d70-8462-cb6d8ba2023a/volumes" Feb 16 13:55:31 crc kubenswrapper[4812]: I0216 13:55:31.933950 4812 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f779db3-6985-40e2-ba00-85650a832066" path="/var/lib/kubelet/pods/6f779db3-6985-40e2-ba00-85650a832066/volumes" Feb 16 13:55:32 crc kubenswrapper[4812]: I0216 13:55:32.141187 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ea4e67-222a-4f37-9d17-371c857ef7c4","Type":"ContainerStarted","Data":"ab82d7434e0e5eafde2202594edaa832b51173c34f47c36eac8a45c2ef7ea76b"} Feb 16 13:55:32 crc kubenswrapper[4812]: I0216 13:55:32.150884 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-775b4c67cd-9n6f8" event={"ID":"938fb099-5861-4d4f-8105-bcd26cbbcabd","Type":"ContainerStarted","Data":"99df326f5bca081a2c902aa913c70c7ca3924a5448a2a5ff7018d960f3d48b33"} Feb 16 13:55:32 crc kubenswrapper[4812]: I0216 13:55:32.150960 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-775b4c67cd-9n6f8" event={"ID":"938fb099-5861-4d4f-8105-bcd26cbbcabd","Type":"ContainerStarted","Data":"6a8312078a00af4cd140ab5529cf65ddfed82d96b8d60bbab70197276af0dc09"} Feb 16 13:55:32 crc kubenswrapper[4812]: I0216 13:55:32.151456 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:32 crc kubenswrapper[4812]: I0216 13:55:32.152180 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:32 crc kubenswrapper[4812]: I0216 13:55:32.187188 4812 generic.go:334] "Generic (PLEG): container finished" podID="e3116255-f9dd-4ce3-bf47-779d963bbb98" containerID="d69d79311177b7da6862aee96773e69561edd5a71ea05e7465f24c06bc7f0478" exitCode=0 Feb 16 13:55:32 crc kubenswrapper[4812]: I0216 13:55:32.187387 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"e3116255-f9dd-4ce3-bf47-779d963bbb98","Type":"ContainerDied","Data":"d69d79311177b7da6862aee96773e69561edd5a71ea05e7465f24c06bc7f0478"} Feb 16 13:55:32 crc kubenswrapper[4812]: I0216 13:55:32.213686 4812 generic.go:334] "Generic (PLEG): container finished" podID="a81f17cc-32a6-4089-bf61-ea63d46b7f60" containerID="c1efe357abcc44912a5c1e69a578e74717e0f136c9784583633b74c622eac395" exitCode=0 Feb 16 13:55:32 crc kubenswrapper[4812]: I0216 13:55:32.213776 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" event={"ID":"a81f17cc-32a6-4089-bf61-ea63d46b7f60","Type":"ContainerDied","Data":"c1efe357abcc44912a5c1e69a578e74717e0f136c9784583633b74c622eac395"} Feb 16 13:55:32 crc kubenswrapper[4812]: I0216 13:55:32.236334 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:55:32 crc kubenswrapper[4812]: I0216 13:55:32.263258 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-775b4c67cd-9n6f8" podStartSLOduration=4.263223034 podStartE2EDuration="4.263223034s" podCreationTimestamp="2026-02-16 13:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:32.201032067 +0000 UTC m=+1421.265362778" watchObservedRunningTime="2026-02-16 13:55:32.263223034 +0000 UTC m=+1421.327553735" Feb 16 13:55:32 crc kubenswrapper[4812]: I0216 13:55:32.482588 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:32 crc kubenswrapper[4812]: I0216 13:55:32.617654 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:33 crc kubenswrapper[4812]: E0216 13:55:33.008012 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 13:55:33 crc kubenswrapper[4812]: E0216 13:55:33.008621 4812 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 13:55:33 crc kubenswrapper[4812]: E0216 13:55:33.009434 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq54s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-krnzs_openstack(a7d4eae6-781f-4675-a6c3-ee0f1589c735): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 13:55:33 crc kubenswrapper[4812]: E0216 13:55:33.010614 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:55:33 crc kubenswrapper[4812]: I0216 13:55:33.091665 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s95g8"] Feb 16 13:55:33 crc kubenswrapper[4812]: I0216 13:55:33.263118 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"310d4179-66e9-4979-984a-3844494fe6ab","Type":"ContainerStarted","Data":"453be0dbb6cf87854dc9acdace0fb6a2b1c825d83c67983690d7c500a3566d40"} Feb 16 13:55:33 crc kubenswrapper[4812]: I0216 13:55:33.269424 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a","Type":"ContainerStarted","Data":"d3b7780a900fdcea6a91e94c527cbab05a8a72f3d47ea137d55344ae7b997b60"} Feb 16 13:55:34 crc kubenswrapper[4812]: I0216 13:55:34.289660 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s95g8" podUID="b4a72604-ad70-4ca7-97fc-582483d19fd1" containerName="registry-server" containerID="cri-o://e3a12be9d8ac6087efeac66f8f824c59971594252fd744220f51230726a15a00" gracePeriod=2 Feb 16 13:55:35 crc kubenswrapper[4812]: I0216 13:55:35.309828 4812 generic.go:334] "Generic (PLEG): container finished" podID="b4a72604-ad70-4ca7-97fc-582483d19fd1" containerID="e3a12be9d8ac6087efeac66f8f824c59971594252fd744220f51230726a15a00" exitCode=0 Feb 16 13:55:35 crc kubenswrapper[4812]: I0216 13:55:35.309926 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s95g8" event={"ID":"b4a72604-ad70-4ca7-97fc-582483d19fd1","Type":"ContainerDied","Data":"e3a12be9d8ac6087efeac66f8f824c59971594252fd744220f51230726a15a00"} Feb 16 13:55:35 crc kubenswrapper[4812]: I0216 13:55:35.950756 4812 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" 
cgroupName=["kubepods","besteffort","pod1eb07864-3ace-404d-b092-271e2a57e677"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod1eb07864-3ace-404d-b092-271e2a57e677] : Timed out while waiting for systemd to remove kubepods-besteffort-pod1eb07864_3ace_404d_b092_271e2a57e677.slice" Feb 16 13:55:35 crc kubenswrapper[4812]: E0216 13:55:35.951736 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod1eb07864-3ace-404d-b092-271e2a57e677] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod1eb07864-3ace-404d-b092-271e2a57e677] : Timed out while waiting for systemd to remove kubepods-besteffort-pod1eb07864_3ace_404d_b092_271e2a57e677.slice" pod="openstack/dnsmasq-dns-698758b865-jbrfm" podUID="1eb07864-3ace-404d-b092-271e2a57e677" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.028749 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.095813 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-dd87694f4-8qsk9"] Feb 16 13:55:36 crc kubenswrapper[4812]: E0216 13:55:36.096497 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a72604-ad70-4ca7-97fc-582483d19fd1" containerName="extract-content" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.096526 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a72604-ad70-4ca7-97fc-582483d19fd1" containerName="extract-content" Feb 16 13:55:36 crc kubenswrapper[4812]: E0216 13:55:36.096563 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a72604-ad70-4ca7-97fc-582483d19fd1" containerName="registry-server" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.096574 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a72604-ad70-4ca7-97fc-582483d19fd1" containerName="registry-server" Feb 16 
13:55:36 crc kubenswrapper[4812]: E0216 13:55:36.096601 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a72604-ad70-4ca7-97fc-582483d19fd1" containerName="extract-utilities" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.096612 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a72604-ad70-4ca7-97fc-582483d19fd1" containerName="extract-utilities" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.096929 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a72604-ad70-4ca7-97fc-582483d19fd1" containerName="registry-server" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.102587 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.113933 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.114574 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.115498 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c51849be-b016-41a0-9959-654f56fd10c2-config-data-custom\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.115738 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c51849be-b016-41a0-9959-654f56fd10c2-internal-tls-certs\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 
13:55:36.115984 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c51849be-b016-41a0-9959-654f56fd10c2-public-tls-certs\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.116098 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ggzq\" (UniqueName: \"kubernetes.io/projected/c51849be-b016-41a0-9959-654f56fd10c2-kube-api-access-8ggzq\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.116157 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c51849be-b016-41a0-9959-654f56fd10c2-config-data\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.116218 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51849be-b016-41a0-9959-654f56fd10c2-combined-ca-bundle\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.116346 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c51849be-b016-41a0-9959-654f56fd10c2-logs\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 
crc kubenswrapper[4812]: I0216 13:55:36.211644 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-dd87694f4-8qsk9"] Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.221968 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4a72604-ad70-4ca7-97fc-582483d19fd1-utilities\") pod \"b4a72604-ad70-4ca7-97fc-582483d19fd1\" (UID: \"b4a72604-ad70-4ca7-97fc-582483d19fd1\") " Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.222061 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4a72604-ad70-4ca7-97fc-582483d19fd1-catalog-content\") pod \"b4a72604-ad70-4ca7-97fc-582483d19fd1\" (UID: \"b4a72604-ad70-4ca7-97fc-582483d19fd1\") " Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.222170 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58mxl\" (UniqueName: \"kubernetes.io/projected/b4a72604-ad70-4ca7-97fc-582483d19fd1-kube-api-access-58mxl\") pod \"b4a72604-ad70-4ca7-97fc-582483d19fd1\" (UID: \"b4a72604-ad70-4ca7-97fc-582483d19fd1\") " Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.222681 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c51849be-b016-41a0-9959-654f56fd10c2-config-data-custom\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.222744 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c51849be-b016-41a0-9959-654f56fd10c2-internal-tls-certs\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " 
pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.222828 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c51849be-b016-41a0-9959-654f56fd10c2-public-tls-certs\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.222879 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ggzq\" (UniqueName: \"kubernetes.io/projected/c51849be-b016-41a0-9959-654f56fd10c2-kube-api-access-8ggzq\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.222909 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c51849be-b016-41a0-9959-654f56fd10c2-config-data\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.222942 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51849be-b016-41a0-9959-654f56fd10c2-combined-ca-bundle\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.223005 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c51849be-b016-41a0-9959-654f56fd10c2-logs\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 
crc kubenswrapper[4812]: I0216 13:55:36.223639 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c51849be-b016-41a0-9959-654f56fd10c2-logs\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.225017 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4a72604-ad70-4ca7-97fc-582483d19fd1-utilities" (OuterVolumeSpecName: "utilities") pod "b4a72604-ad70-4ca7-97fc-582483d19fd1" (UID: "b4a72604-ad70-4ca7-97fc-582483d19fd1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.238777 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c51849be-b016-41a0-9959-654f56fd10c2-config-data\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.256276 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c51849be-b016-41a0-9959-654f56fd10c2-public-tls-certs\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.256608 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c51849be-b016-41a0-9959-654f56fd10c2-config-data-custom\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.256667 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ggzq\" (UniqueName: \"kubernetes.io/projected/c51849be-b016-41a0-9959-654f56fd10c2-kube-api-access-8ggzq\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.290252 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51849be-b016-41a0-9959-654f56fd10c2-combined-ca-bundle\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.294608 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4a72604-ad70-4ca7-97fc-582483d19fd1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4a72604-ad70-4ca7-97fc-582483d19fd1" (UID: "b4a72604-ad70-4ca7-97fc-582483d19fd1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.303834 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c51849be-b016-41a0-9959-654f56fd10c2-internal-tls-certs\") pod \"barbican-api-dd87694f4-8qsk9\" (UID: \"c51849be-b016-41a0-9959-654f56fd10c2\") " pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.310768 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4a72604-ad70-4ca7-97fc-582483d19fd1-kube-api-access-58mxl" (OuterVolumeSpecName: "kube-api-access-58mxl") pod "b4a72604-ad70-4ca7-97fc-582483d19fd1" (UID: "b4a72604-ad70-4ca7-97fc-582483d19fd1"). InnerVolumeSpecName "kube-api-access-58mxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.336331 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4a72604-ad70-4ca7-97fc-582483d19fd1-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.336378 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4a72604-ad70-4ca7-97fc-582483d19fd1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.336392 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58mxl\" (UniqueName: \"kubernetes.io/projected/b4a72604-ad70-4ca7-97fc-582483d19fd1-kube-api-access-58mxl\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.386755 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e3116255-f9dd-4ce3-bf47-779d963bbb98","Type":"ContainerStarted","Data":"c2ad693360806131430545e0c62113dfecd570e120a5b1d4995ae4a5d4295a90"} Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.401862 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" event={"ID":"a81f17cc-32a6-4089-bf61-ea63d46b7f60","Type":"ContainerStarted","Data":"a1daf64fd7e2e69a58e858950dcff8123d50b7db258267cd2dcedea00fcade73"} Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.408196 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.430811 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jbrfm" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.431963 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s95g8" event={"ID":"b4a72604-ad70-4ca7-97fc-582483d19fd1","Type":"ContainerDied","Data":"4e95b84b6fcefd373e0f1c7648a15892ccddb4a7db1f37a6c1ce0a029896636b"} Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.432083 4812 scope.go:117] "RemoveContainer" containerID="e3a12be9d8ac6087efeac66f8f824c59971594252fd744220f51230726a15a00" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.432479 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s95g8" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.491384 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" podStartSLOduration=8.491345233 podStartE2EDuration="8.491345233s" podCreationTimestamp="2026-02-16 13:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:36.448803126 +0000 UTC m=+1425.513133847" watchObservedRunningTime="2026-02-16 13:55:36.491345233 +0000 UTC m=+1425.555675934" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.544958 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:36 crc kubenswrapper[4812]: I0216 13:55:36.980701 4812 scope.go:117] "RemoveContainer" containerID="cef25a4d5103caab9b062bab4abb0dc8020c944a78203aa562102a3bb3cc554b" Feb 16 13:55:37 crc kubenswrapper[4812]: I0216 13:55:37.010563 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jbrfm"] Feb 16 13:55:37 crc kubenswrapper[4812]: I0216 13:55:37.033557 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jbrfm"] Feb 16 13:55:37 crc kubenswrapper[4812]: I0216 13:55:37.185855 4812 scope.go:117] "RemoveContainer" containerID="12662c8bde7f09c10ee913f1cd070f8770d38c5f52d13332a248cbc0e3053bec" Feb 16 13:55:37 crc kubenswrapper[4812]: I0216 13:55:37.212296 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s95g8"] Feb 16 13:55:37 crc kubenswrapper[4812]: I0216 13:55:37.254642 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s95g8"] Feb 16 13:55:37 crc kubenswrapper[4812]: I0216 13:55:37.679967 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59c64f6659-7rr8v" event={"ID":"1e7c7a64-8967-4ee4-af38-c6d384fbd722","Type":"ContainerStarted","Data":"ae9c7ec84cd580e80ef30841aeb7ca5e87bdbd46a32a759cbff949cf291fbe1e"} Feb 16 13:55:37 crc kubenswrapper[4812]: I0216 13:55:37.680562 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59c64f6659-7rr8v" event={"ID":"1e7c7a64-8967-4ee4-af38-c6d384fbd722","Type":"ContainerStarted","Data":"6e8ab91284d886e96a7db1180a8aa7e443434463fa086952f5d78c2b0e9ed850"} Feb 16 13:55:37 crc kubenswrapper[4812]: I0216 13:55:37.721386 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-59c64f6659-7rr8v" podStartSLOduration=4.299609472 podStartE2EDuration="9.721349654s" 
podCreationTimestamp="2026-02-16 13:55:28 +0000 UTC" firstStartedPulling="2026-02-16 13:55:30.178612372 +0000 UTC m=+1419.242943073" lastFinishedPulling="2026-02-16 13:55:35.600352544 +0000 UTC m=+1424.664683255" observedRunningTime="2026-02-16 13:55:37.712284781 +0000 UTC m=+1426.776615482" watchObservedRunningTime="2026-02-16 13:55:37.721349654 +0000 UTC m=+1426.785680355" Feb 16 13:55:37 crc kubenswrapper[4812]: I0216 13:55:37.742822 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ea4e67-222a-4f37-9d17-371c857ef7c4","Type":"ContainerStarted","Data":"477c06b4ba80370c1ae5846f1b3c91e49c6ec8c823f8fb9b2fdad46011225d66"} Feb 16 13:55:37 crc kubenswrapper[4812]: I0216 13:55:37.779886 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" event={"ID":"b743ee5f-7d4b-4e37-b46f-449f1c1155f9","Type":"ContainerStarted","Data":"c5c5b5752804f6d1ad19c5d8fd7ad1a3dc9c601fd379b29b9a967b3ffc58c198"} Feb 16 13:55:37 crc kubenswrapper[4812]: I0216 13:55:37.818234 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-dd87694f4-8qsk9"] Feb 16 13:55:37 crc kubenswrapper[4812]: I0216 13:55:37.933261 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eb07864-3ace-404d-b092-271e2a57e677" path="/var/lib/kubelet/pods/1eb07864-3ace-404d-b092-271e2a57e677/volumes" Feb 16 13:55:37 crc kubenswrapper[4812]: I0216 13:55:37.946963 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a72604-ad70-4ca7-97fc-582483d19fd1" path="/var/lib/kubelet/pods/b4a72604-ad70-4ca7-97fc-582483d19fd1/volumes" Feb 16 13:55:38 crc kubenswrapper[4812]: I0216 13:55:38.826211 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a","Type":"ContainerStarted","Data":"38ca181d57564e3d6ad706c51a9193d16d41a76b7857f3e4ebaba369b407f564"} Feb 16 13:55:38 crc 
kubenswrapper[4812]: I0216 13:55:38.873252 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"310d4179-66e9-4979-984a-3844494fe6ab","Type":"ContainerStarted","Data":"6cd73ee475a79b5d0021618d5f7387154d7713a18d46ba761f42d3c6d32c274d"} Feb 16 13:55:38 crc kubenswrapper[4812]: I0216 13:55:38.873540 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="310d4179-66e9-4979-984a-3844494fe6ab" containerName="cinder-api-log" containerID="cri-o://453be0dbb6cf87854dc9acdace0fb6a2b1c825d83c67983690d7c500a3566d40" gracePeriod=30 Feb 16 13:55:38 crc kubenswrapper[4812]: I0216 13:55:38.873913 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 13:55:38 crc kubenswrapper[4812]: I0216 13:55:38.873982 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="310d4179-66e9-4979-984a-3844494fe6ab" containerName="cinder-api" containerID="cri-o://6cd73ee475a79b5d0021618d5f7387154d7713a18d46ba761f42d3c6d32c274d" gracePeriod=30 Feb 16 13:55:38 crc kubenswrapper[4812]: I0216 13:55:38.898564 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=9.586162915 podStartE2EDuration="10.898537411s" podCreationTimestamp="2026-02-16 13:55:28 +0000 UTC" firstStartedPulling="2026-02-16 13:55:30.178020985 +0000 UTC m=+1419.242351686" lastFinishedPulling="2026-02-16 13:55:31.490395471 +0000 UTC m=+1420.554726182" observedRunningTime="2026-02-16 13:55:38.874348987 +0000 UTC m=+1427.938679708" watchObservedRunningTime="2026-02-16 13:55:38.898537411 +0000 UTC m=+1427.962868112" Feb 16 13:55:38 crc kubenswrapper[4812]: I0216 13:55:38.922072 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dd87694f4-8qsk9" 
event={"ID":"c51849be-b016-41a0-9959-654f56fd10c2","Type":"ContainerStarted","Data":"e0ece3db46524053f1f4db89e563af19c8b3e1f82d150d5b9319f4d63671089a"} Feb 16 13:55:38 crc kubenswrapper[4812]: I0216 13:55:38.985115 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" event={"ID":"b743ee5f-7d4b-4e37-b46f-449f1c1155f9","Type":"ContainerStarted","Data":"0bc788d26c65ea059acb967b07ae493b52fd9b6a077c8736d0c6ae723811af1d"} Feb 16 13:55:39 crc kubenswrapper[4812]: I0216 13:55:39.073925 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=10.073880347 podStartE2EDuration="10.073880347s" podCreationTimestamp="2026-02-16 13:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:38.952663814 +0000 UTC m=+1428.016994515" watchObservedRunningTime="2026-02-16 13:55:39.073880347 +0000 UTC m=+1428.138211048" Feb 16 13:55:39 crc kubenswrapper[4812]: I0216 13:55:39.174473 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 13:55:39 crc kubenswrapper[4812]: I0216 13:55:39.224326 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.182:8080/\": dial tcp 10.217.0.182:8080: connect: connection refused" Feb 16 13:55:39 crc kubenswrapper[4812]: I0216 13:55:39.238779 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-57b9fd55d-zs44x" podStartSLOduration=5.408684568 podStartE2EDuration="11.238735009s" podCreationTimestamp="2026-02-16 13:55:28 +0000 UTC" firstStartedPulling="2026-02-16 13:55:29.787560975 +0000 UTC m=+1418.851891676" lastFinishedPulling="2026-02-16 
13:55:35.617611406 +0000 UTC m=+1424.681942117" observedRunningTime="2026-02-16 13:55:39.084055643 +0000 UTC m=+1428.148386344" watchObservedRunningTime="2026-02-16 13:55:39.238735009 +0000 UTC m=+1428.303065710" Feb 16 13:55:40 crc kubenswrapper[4812]: I0216 13:55:40.009393 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ea4e67-222a-4f37-9d17-371c857ef7c4","Type":"ContainerStarted","Data":"5964725c040789733634d317ab49e2313fbaf11ec314384a28443cd22a3b1be7"} Feb 16 13:55:40 crc kubenswrapper[4812]: I0216 13:55:40.016907 4812 generic.go:334] "Generic (PLEG): container finished" podID="310d4179-66e9-4979-984a-3844494fe6ab" containerID="453be0dbb6cf87854dc9acdace0fb6a2b1c825d83c67983690d7c500a3566d40" exitCode=143 Feb 16 13:55:40 crc kubenswrapper[4812]: I0216 13:55:40.016989 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"310d4179-66e9-4979-984a-3844494fe6ab","Type":"ContainerDied","Data":"453be0dbb6cf87854dc9acdace0fb6a2b1c825d83c67983690d7c500a3566d40"} Feb 16 13:55:40 crc kubenswrapper[4812]: I0216 13:55:40.020733 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dd87694f4-8qsk9" event={"ID":"c51849be-b016-41a0-9959-654f56fd10c2","Type":"ContainerStarted","Data":"3f967f2285666fdf5f13e962b4c273bfb5ef9c372a579b3a7c1496b66ddbd44a"} Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.045195 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dd87694f4-8qsk9" event={"ID":"c51849be-b016-41a0-9959-654f56fd10c2","Type":"ContainerStarted","Data":"ae27e34638485c82555d2f83b8cd2c4fbc709980d04ea61724840c040aac0f12"} Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.045764 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.045782 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.060752 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.079667 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-dd87694f4-8qsk9" podStartSLOduration=5.079623638 podStartE2EDuration="5.079623638s" podCreationTimestamp="2026-02-16 13:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:41.072078279 +0000 UTC m=+1430.136409000" watchObservedRunningTime="2026-02-16 13:55:41.079623638 +0000 UTC m=+1430.143954339" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.115479 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ea4e67-222a-4f37-9d17-371c857ef7c4","Type":"ContainerStarted","Data":"11d1f796f6c4fbd7354fe51f676de7d0d54912b105777cdecb7f20c0defd22ac"} Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.145957 4812 generic.go:334] "Generic (PLEG): container finished" podID="310d4179-66e9-4979-984a-3844494fe6ab" containerID="6cd73ee475a79b5d0021618d5f7387154d7713a18d46ba761f42d3c6d32c274d" exitCode=0 Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.147652 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"310d4179-66e9-4979-984a-3844494fe6ab","Type":"ContainerDied","Data":"6cd73ee475a79b5d0021618d5f7387154d7713a18d46ba761f42d3c6d32c274d"} Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.636040 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.640351 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.765967 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-scripts\") pod \"310d4179-66e9-4979-984a-3844494fe6ab\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.766190 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-config-data-custom\") pod \"310d4179-66e9-4979-984a-3844494fe6ab\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.766283 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-config-data\") pod \"310d4179-66e9-4979-984a-3844494fe6ab\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.766393 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbxw9\" (UniqueName: \"kubernetes.io/projected/310d4179-66e9-4979-984a-3844494fe6ab-kube-api-access-gbxw9\") pod \"310d4179-66e9-4979-984a-3844494fe6ab\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.766433 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/310d4179-66e9-4979-984a-3844494fe6ab-etc-machine-id\") pod \"310d4179-66e9-4979-984a-3844494fe6ab\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " Feb 
16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.766633 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/310d4179-66e9-4979-984a-3844494fe6ab-logs\") pod \"310d4179-66e9-4979-984a-3844494fe6ab\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.766886 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-combined-ca-bundle\") pod \"310d4179-66e9-4979-984a-3844494fe6ab\" (UID: \"310d4179-66e9-4979-984a-3844494fe6ab\") " Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.771634 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/310d4179-66e9-4979-984a-3844494fe6ab-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "310d4179-66e9-4979-984a-3844494fe6ab" (UID: "310d4179-66e9-4979-984a-3844494fe6ab"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.776046 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310d4179-66e9-4979-984a-3844494fe6ab-logs" (OuterVolumeSpecName: "logs") pod "310d4179-66e9-4979-984a-3844494fe6ab" (UID: "310d4179-66e9-4979-984a-3844494fe6ab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.795094 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310d4179-66e9-4979-984a-3844494fe6ab-kube-api-access-gbxw9" (OuterVolumeSpecName: "kube-api-access-gbxw9") pod "310d4179-66e9-4979-984a-3844494fe6ab" (UID: "310d4179-66e9-4979-984a-3844494fe6ab"). InnerVolumeSpecName "kube-api-access-gbxw9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.807746 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-scripts" (OuterVolumeSpecName: "scripts") pod "310d4179-66e9-4979-984a-3844494fe6ab" (UID: "310d4179-66e9-4979-984a-3844494fe6ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.816773 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "310d4179-66e9-4979-984a-3844494fe6ab" (UID: "310d4179-66e9-4979-984a-3844494fe6ab"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.865765 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "310d4179-66e9-4979-984a-3844494fe6ab" (UID: "310d4179-66e9-4979-984a-3844494fe6ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.905259 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-config-data" (OuterVolumeSpecName: "config-data") pod "310d4179-66e9-4979-984a-3844494fe6ab" (UID: "310d4179-66e9-4979-984a-3844494fe6ab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.911098 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.911153 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.911169 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.911178 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/310d4179-66e9-4979-984a-3844494fe6ab-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.911189 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbxw9\" (UniqueName: \"kubernetes.io/projected/310d4179-66e9-4979-984a-3844494fe6ab-kube-api-access-gbxw9\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.911201 4812 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/310d4179-66e9-4979-984a-3844494fe6ab-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:41 crc kubenswrapper[4812]: I0216 13:55:41.911211 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/310d4179-66e9-4979-984a-3844494fe6ab-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.112995 4812 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/placement-5f575bfd48-dqv2k"] Feb 16 13:55:42 crc kubenswrapper[4812]: E0216 13:55:42.113877 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310d4179-66e9-4979-984a-3844494fe6ab" containerName="cinder-api" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.113911 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="310d4179-66e9-4979-984a-3844494fe6ab" containerName="cinder-api" Feb 16 13:55:42 crc kubenswrapper[4812]: E0216 13:55:42.113969 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310d4179-66e9-4979-984a-3844494fe6ab" containerName="cinder-api-log" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.113978 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="310d4179-66e9-4979-984a-3844494fe6ab" containerName="cinder-api-log" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.114279 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="310d4179-66e9-4979-984a-3844494fe6ab" containerName="cinder-api" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.114310 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="310d4179-66e9-4979-984a-3844494fe6ab" containerName="cinder-api-log" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.116305 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.198597 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f575bfd48-dqv2k"] Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.226468 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-public-tls-certs\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.226584 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmkmb\" (UniqueName: \"kubernetes.io/projected/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-kube-api-access-mmkmb\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.226637 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-scripts\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.226676 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-logs\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.226709 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-config-data\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.226769 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-combined-ca-bundle\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.226835 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-internal-tls-certs\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.265664 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"310d4179-66e9-4979-984a-3844494fe6ab","Type":"ContainerDied","Data":"4b6e8ec2f0fbea4be90efa68d9f98a0850607cdffb24628a3e7ba58b48a6b809"} Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.265764 4812 scope.go:117] "RemoveContainer" containerID="6cd73ee475a79b5d0021618d5f7387154d7713a18d46ba761f42d3c6d32c274d" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.266032 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.372088 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-internal-tls-certs\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.373008 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-public-tls-certs\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.373278 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmkmb\" (UniqueName: \"kubernetes.io/projected/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-kube-api-access-mmkmb\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.373399 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-scripts\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.373501 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-logs\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc 
kubenswrapper[4812]: I0216 13:55:42.373566 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-config-data\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.373658 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-combined-ca-bundle\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.384365 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-logs\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.421736 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e3116255-f9dd-4ce3-bf47-779d963bbb98","Type":"ContainerStarted","Data":"f4dbf93ed8db2a8cd07b9a2157a9535edd35d9387299d1591483498abed46d65"} Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.430192 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmkmb\" (UniqueName: \"kubernetes.io/projected/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-kube-api-access-mmkmb\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.430287 4812 scope.go:117] "RemoveContainer" containerID="453be0dbb6cf87854dc9acdace0fb6a2b1c825d83c67983690d7c500a3566d40" Feb 16 
13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.480651 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-internal-tls-certs\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.481363 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-combined-ca-bundle\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.481727 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-scripts\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.483324 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-public-tls-certs\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.491725 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa2da193-05ce-4fae-968e-5f9a7e2efd2c-config-data\") pod \"placement-5f575bfd48-dqv2k\" (UID: \"fa2da193-05ce-4fae-968e-5f9a7e2efd2c\") " pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.526565 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-api-0"] Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.670402 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.684574 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.710107 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.713798 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.718776 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.719503 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.719751 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.727146 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.836606 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.836684 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.836736 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73d33b57-0c02-4e05-b1a2-0d3075385bd4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.836761 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73d33b57-0c02-4e05-b1a2-0d3075385bd4-logs\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.836796 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-config-data\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.836831 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.836853 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-config-data-custom\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 
13:55:42.836881 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6fvd\" (UniqueName: \"kubernetes.io/projected/73d33b57-0c02-4e05-b1a2-0d3075385bd4-kube-api-access-p6fvd\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.836912 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-scripts\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.940472 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.941034 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.941126 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73d33b57-0c02-4e05-b1a2-0d3075385bd4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.941158 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/73d33b57-0c02-4e05-b1a2-0d3075385bd4-logs\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.941301 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-config-data\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.941454 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.941509 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-config-data-custom\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.941566 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6fvd\" (UniqueName: \"kubernetes.io/projected/73d33b57-0c02-4e05-b1a2-0d3075385bd4-kube-api-access-p6fvd\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.941640 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-scripts\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 
13:55:42.941675 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73d33b57-0c02-4e05-b1a2-0d3075385bd4-logs\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.946611 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73d33b57-0c02-4e05-b1a2-0d3075385bd4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.952535 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.953067 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.970785 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-config-data\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.982375 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " 
pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.986242 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6fvd\" (UniqueName: \"kubernetes.io/projected/73d33b57-0c02-4e05-b1a2-0d3075385bd4-kube-api-access-p6fvd\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:42 crc kubenswrapper[4812]: I0216 13:55:42.992705 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-scripts\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:43 crc kubenswrapper[4812]: I0216 13:55:43.187707 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73d33b57-0c02-4e05-b1a2-0d3075385bd4-config-data-custom\") pod \"cinder-api-0\" (UID: \"73d33b57-0c02-4e05-b1a2-0d3075385bd4\") " pod="openstack/cinder-api-0" Feb 16 13:55:43 crc kubenswrapper[4812]: I0216 13:55:43.199891 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 13:55:43 crc kubenswrapper[4812]: I0216 13:55:43.420935 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-775b4c67cd-9n6f8" podUID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 13:55:43 crc kubenswrapper[4812]: I0216 13:55:43.422522 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-775b4c67cd-9n6f8" podUID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 13:55:43 crc kubenswrapper[4812]: I0216 13:55:43.450840 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e3116255-f9dd-4ce3-bf47-779d963bbb98","Type":"ContainerStarted","Data":"f7bb33b8cce83621811064cb117b46324550e642c246c53672fadc40a95d18aa"} Feb 16 13:55:43 crc kubenswrapper[4812]: I0216 13:55:43.544744 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=37.54470903 podStartE2EDuration="37.54470903s" podCreationTimestamp="2026-02-16 13:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:43.535204504 +0000 UTC m=+1432.599535215" watchObservedRunningTime="2026-02-16 13:55:43.54470903 +0000 UTC m=+1432.609039731" Feb 16 13:55:43 crc kubenswrapper[4812]: I0216 13:55:43.660376 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f575bfd48-dqv2k"] Feb 16 13:55:43 crc kubenswrapper[4812]: W0216 13:55:43.663375 4812 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa2da193_05ce_4fae_968e_5f9a7e2efd2c.slice/crio-e8c0ae583cef1cf5d9a331bd6dc975b9ca764fff2e51b5dfc3b08f8e08eb323f WatchSource:0}: Error finding container e8c0ae583cef1cf5d9a331bd6dc975b9ca764fff2e51b5dfc3b08f8e08eb323f: Status 404 returned error can't find the container with id e8c0ae583cef1cf5d9a331bd6dc975b9ca764fff2e51b5dfc3b08f8e08eb323f Feb 16 13:55:43 crc kubenswrapper[4812]: I0216 13:55:43.950836 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="310d4179-66e9-4979-984a-3844494fe6ab" path="/var/lib/kubelet/pods/310d4179-66e9-4979-984a-3844494fe6ab/volumes" Feb 16 13:55:43 crc kubenswrapper[4812]: I0216 13:55:43.978871 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:44 crc kubenswrapper[4812]: I0216 13:55:44.066681 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 13:55:44 crc kubenswrapper[4812]: I0216 13:55:44.170214 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.182:8080/\": dial tcp 10.217.0.182:8080: connect: connection refused" Feb 16 13:55:44 crc kubenswrapper[4812]: I0216 13:55:44.207513 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-f7dcf4bcb-h6jf8" Feb 16 13:55:44 crc kubenswrapper[4812]: I0216 13:55:44.378766 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-775b4c67cd-9n6f8" podUID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 13:55:44 crc kubenswrapper[4812]: I0216 
13:55:44.390705 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" Feb 16 13:55:44 crc kubenswrapper[4812]: I0216 13:55:44.586334 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-4ww9m"] Feb 16 13:55:44 crc kubenswrapper[4812]: I0216 13:55:44.587317 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" podUID="1492db35-d6ea-4d34-b29a-6d5537694379" containerName="dnsmasq-dns" containerID="cri-o://af14d5bffce96d7b25e41a622cb10a5e6fc0c537475ff4731e5e80f15fc12bd1" gracePeriod=10 Feb 16 13:55:44 crc kubenswrapper[4812]: I0216 13:55:44.598972 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f575bfd48-dqv2k" event={"ID":"fa2da193-05ce-4fae-968e-5f9a7e2efd2c","Type":"ContainerStarted","Data":"79d39b14bd68dbd6d5b7b60a94f8ad61f33aae0eeec387c7f03e6d775d5407e5"} Feb 16 13:55:44 crc kubenswrapper[4812]: I0216 13:55:44.599068 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f575bfd48-dqv2k" event={"ID":"fa2da193-05ce-4fae-968e-5f9a7e2efd2c","Type":"ContainerStarted","Data":"e8c0ae583cef1cf5d9a331bd6dc975b9ca764fff2e51b5dfc3b08f8e08eb323f"} Feb 16 13:55:44 crc kubenswrapper[4812]: I0216 13:55:44.614352 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"73d33b57-0c02-4e05-b1a2-0d3075385bd4","Type":"ContainerStarted","Data":"369145d16174dd9fc1aeb9cc4bd8b4d6ca73de67a1e697ca8011e843a9dd0156"} Feb 16 13:55:44 crc kubenswrapper[4812]: I0216 13:55:44.640663 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ea4e67-222a-4f37-9d17-371c857ef7c4","Type":"ContainerStarted","Data":"ef0d1891d71d6d74ce69141dc770e23d61636db3fd3014496b8d3f465d7e4072"} Feb 16 13:55:44 crc kubenswrapper[4812]: I0216 13:55:44.640871 4812 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:55:44 crc kubenswrapper[4812]: I0216 13:55:44.686140 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.107222985 podStartE2EDuration="14.686091697s" podCreationTimestamp="2026-02-16 13:55:30 +0000 UTC" firstStartedPulling="2026-02-16 13:55:31.532170955 +0000 UTC m=+1420.596501656" lastFinishedPulling="2026-02-16 13:55:42.111039657 +0000 UTC m=+1431.175370368" observedRunningTime="2026-02-16 13:55:44.67349823 +0000 UTC m=+1433.737828941" watchObservedRunningTime="2026-02-16 13:55:44.686091697 +0000 UTC m=+1433.750422408" Feb 16 13:55:45 crc kubenswrapper[4812]: I0216 13:55:45.709832 4812 generic.go:334] "Generic (PLEG): container finished" podID="1492db35-d6ea-4d34-b29a-6d5537694379" containerID="af14d5bffce96d7b25e41a622cb10a5e6fc0c537475ff4731e5e80f15fc12bd1" exitCode=0 Feb 16 13:55:45 crc kubenswrapper[4812]: I0216 13:55:45.709966 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" event={"ID":"1492db35-d6ea-4d34-b29a-6d5537694379","Type":"ContainerDied","Data":"af14d5bffce96d7b25e41a622cb10a5e6fc0c537475ff4731e5e80f15fc12bd1"} Feb 16 13:55:45 crc kubenswrapper[4812]: I0216 13:55:45.735326 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f575bfd48-dqv2k" event={"ID":"fa2da193-05ce-4fae-968e-5f9a7e2efd2c","Type":"ContainerStarted","Data":"ff5e3c93dfbfda03625559b75ed37267317f4bdae2098b2031ece3d45db95cff"} Feb 16 13:55:45 crc kubenswrapper[4812]: I0216 13:55:45.735476 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:45 crc kubenswrapper[4812]: I0216 13:55:45.736151 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:55:45 crc kubenswrapper[4812]: I0216 13:55:45.816503 4812 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5f575bfd48-dqv2k" podStartSLOduration=4.816437501 podStartE2EDuration="4.816437501s" podCreationTimestamp="2026-02-16 13:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:45.791891508 +0000 UTC m=+1434.856222209" watchObservedRunningTime="2026-02-16 13:55:45.816437501 +0000 UTC m=+1434.880768202" Feb 16 13:55:45 crc kubenswrapper[4812]: E0216 13:55:45.902834 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:55:45 crc kubenswrapper[4812]: I0216 13:55:45.978242 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.028819 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-dns-svc\") pod \"1492db35-d6ea-4d34-b29a-6d5537694379\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.028942 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-ovsdbserver-sb\") pod \"1492db35-d6ea-4d34-b29a-6d5537694379\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.028985 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-dns-swift-storage-0\") pod \"1492db35-d6ea-4d34-b29a-6d5537694379\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.029106 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn2ld\" (UniqueName: \"kubernetes.io/projected/1492db35-d6ea-4d34-b29a-6d5537694379-kube-api-access-vn2ld\") pod \"1492db35-d6ea-4d34-b29a-6d5537694379\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.029138 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-ovsdbserver-nb\") pod \"1492db35-d6ea-4d34-b29a-6d5537694379\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.029262 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-config\") pod \"1492db35-d6ea-4d34-b29a-6d5537694379\" (UID: \"1492db35-d6ea-4d34-b29a-6d5537694379\") " Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.070798 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1492db35-d6ea-4d34-b29a-6d5537694379-kube-api-access-vn2ld" (OuterVolumeSpecName: "kube-api-access-vn2ld") pod "1492db35-d6ea-4d34-b29a-6d5537694379" (UID: "1492db35-d6ea-4d34-b29a-6d5537694379"). InnerVolumeSpecName "kube-api-access-vn2ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.133798 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn2ld\" (UniqueName: \"kubernetes.io/projected/1492db35-d6ea-4d34-b29a-6d5537694379-kube-api-access-vn2ld\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.404921 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1492db35-d6ea-4d34-b29a-6d5537694379" (UID: "1492db35-d6ea-4d34-b29a-6d5537694379"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.405760 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1492db35-d6ea-4d34-b29a-6d5537694379" (UID: "1492db35-d6ea-4d34-b29a-6d5537694379"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.458344 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-config" (OuterVolumeSpecName: "config") pod "1492db35-d6ea-4d34-b29a-6d5537694379" (UID: "1492db35-d6ea-4d34-b29a-6d5537694379"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.482970 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1492db35-d6ea-4d34-b29a-6d5537694379" (UID: "1492db35-d6ea-4d34-b29a-6d5537694379"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.496697 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.496739 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.496751 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.496763 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:46 crc kubenswrapper[4812]: 
I0216 13:55:46.531985 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1492db35-d6ea-4d34-b29a-6d5537694379" (UID: "1492db35-d6ea-4d34-b29a-6d5537694379"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:55:46 crc kubenswrapper[4812]: I0216 13:55:46.601005 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1492db35-d6ea-4d34-b29a-6d5537694379-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:47 crc kubenswrapper[4812]: I0216 13:55:47.076766 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5687b6b775-mt8dp" podUID="9c203d1a-c01d-4dda-889c-4a09ea0c616c" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 13:55:47 crc kubenswrapper[4812]: I0216 13:55:47.105563 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-5687b6b775-mt8dp" podUID="9c203d1a-c01d-4dda-889c-4a09ea0c616c" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 13:55:47 crc kubenswrapper[4812]: I0216 13:55:47.136124 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-5687b6b775-mt8dp" podUID="9c203d1a-c01d-4dda-889c-4a09ea0c616c" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 13:55:47 crc kubenswrapper[4812]: I0216 13:55:47.166766 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" event={"ID":"1492db35-d6ea-4d34-b29a-6d5537694379","Type":"ContainerDied","Data":"a1c57c40f93cbd35027dc114e79003f8b83eac8329e5da689399e995b21d7e9c"} Feb 16 13:55:47 crc kubenswrapper[4812]: I0216 13:55:47.166872 4812 
scope.go:117] "RemoveContainer" containerID="af14d5bffce96d7b25e41a622cb10a5e6fc0c537475ff4731e5e80f15fc12bd1" Feb 16 13:55:47 crc kubenswrapper[4812]: I0216 13:55:47.167178 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" Feb 16 13:55:47 crc kubenswrapper[4812]: I0216 13:55:47.226487 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"73d33b57-0c02-4e05-b1a2-0d3075385bd4","Type":"ContainerStarted","Data":"e02e405f94711f2311ff05ca2a6602f392bcadfd9ff623d157407611742c16af"} Feb 16 13:55:47 crc kubenswrapper[4812]: I0216 13:55:47.267698 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-4ww9m"] Feb 16 13:55:47 crc kubenswrapper[4812]: I0216 13:55:47.328770 4812 scope.go:117] "RemoveContainer" containerID="5a37034d0ae91cb0324bc6faf92a48d37b17b881d4cb75fd178bd25d180e1fc8" Feb 16 13:55:47 crc kubenswrapper[4812]: I0216 13:55:47.351421 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-4ww9m"] Feb 16 13:55:47 crc kubenswrapper[4812]: I0216 13:55:47.366198 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:47 crc kubenswrapper[4812]: I0216 13:55:47.904617 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1492db35-d6ea-4d34-b29a-6d5537694379" path="/var/lib/kubelet/pods/1492db35-d6ea-4d34-b29a-6d5537694379/volumes" Feb 16 13:55:48 crc kubenswrapper[4812]: I0216 13:55:48.299925 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"73d33b57-0c02-4e05-b1a2-0d3075385bd4","Type":"ContainerStarted","Data":"f4727274c0c807562754faecef0d29692d376b3c4ffc98602412f4c5c40acdc0"} Feb 16 13:55:48 crc kubenswrapper[4812]: I0216 13:55:48.303363 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 
13:55:48 crc kubenswrapper[4812]: I0216 13:55:48.376362 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.376316289 podStartE2EDuration="6.376316289s" podCreationTimestamp="2026-02-16 13:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:48.346943376 +0000 UTC m=+1437.411274087" watchObservedRunningTime="2026-02-16 13:55:48.376316289 +0000 UTC m=+1437.440646990" Feb 16 13:55:48 crc kubenswrapper[4812]: I0216 13:55:48.699097 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 16 13:55:48 crc kubenswrapper[4812]: E0216 13:55:48.700008 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1492db35-d6ea-4d34-b29a-6d5537694379" containerName="init" Feb 16 13:55:48 crc kubenswrapper[4812]: I0216 13:55:48.700044 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1492db35-d6ea-4d34-b29a-6d5537694379" containerName="init" Feb 16 13:55:48 crc kubenswrapper[4812]: E0216 13:55:48.700084 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1492db35-d6ea-4d34-b29a-6d5537694379" containerName="dnsmasq-dns" Feb 16 13:55:48 crc kubenswrapper[4812]: I0216 13:55:48.700095 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1492db35-d6ea-4d34-b29a-6d5537694379" containerName="dnsmasq-dns" Feb 16 13:55:48 crc kubenswrapper[4812]: I0216 13:55:48.700379 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1492db35-d6ea-4d34-b29a-6d5537694379" containerName="dnsmasq-dns" Feb 16 13:55:48 crc kubenswrapper[4812]: I0216 13:55:48.705830 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 13:55:48 crc kubenswrapper[4812]: I0216 13:55:48.724073 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 16 13:55:48 crc kubenswrapper[4812]: I0216 13:55:48.724163 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 16 13:55:48 crc kubenswrapper[4812]: I0216 13:55:48.724289 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-58kpf" Feb 16 13:55:48 crc kubenswrapper[4812]: I0216 13:55:48.749980 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 13:55:48 crc kubenswrapper[4812]: I0216 13:55:48.840701 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/528da5b1-5cfd-42dd-bfaf-ad82eb579d97-openstack-config\") pod \"openstackclient\" (UID: \"528da5b1-5cfd-42dd-bfaf-ad82eb579d97\") " pod="openstack/openstackclient" Feb 16 13:55:48 crc kubenswrapper[4812]: I0216 13:55:48.840783 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/528da5b1-5cfd-42dd-bfaf-ad82eb579d97-openstack-config-secret\") pod \"openstackclient\" (UID: \"528da5b1-5cfd-42dd-bfaf-ad82eb579d97\") " pod="openstack/openstackclient" Feb 16 13:55:48 crc kubenswrapper[4812]: I0216 13:55:48.841053 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/528da5b1-5cfd-42dd-bfaf-ad82eb579d97-combined-ca-bundle\") pod \"openstackclient\" (UID: \"528da5b1-5cfd-42dd-bfaf-ad82eb579d97\") " pod="openstack/openstackclient" Feb 16 13:55:48 crc kubenswrapper[4812]: I0216 13:55:48.841627 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4p9n\" (UniqueName: \"kubernetes.io/projected/528da5b1-5cfd-42dd-bfaf-ad82eb579d97-kube-api-access-v4p9n\") pod \"openstackclient\" (UID: \"528da5b1-5cfd-42dd-bfaf-ad82eb579d97\") " pod="openstack/openstackclient" Feb 16 13:55:49 crc kubenswrapper[4812]: I0216 13:55:49.075363 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4p9n\" (UniqueName: \"kubernetes.io/projected/528da5b1-5cfd-42dd-bfaf-ad82eb579d97-kube-api-access-v4p9n\") pod \"openstackclient\" (UID: \"528da5b1-5cfd-42dd-bfaf-ad82eb579d97\") " pod="openstack/openstackclient" Feb 16 13:55:49 crc kubenswrapper[4812]: I0216 13:55:49.079960 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/528da5b1-5cfd-42dd-bfaf-ad82eb579d97-openstack-config\") pod \"openstackclient\" (UID: \"528da5b1-5cfd-42dd-bfaf-ad82eb579d97\") " pod="openstack/openstackclient" Feb 16 13:55:49 crc kubenswrapper[4812]: I0216 13:55:49.080028 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/528da5b1-5cfd-42dd-bfaf-ad82eb579d97-openstack-config-secret\") pod \"openstackclient\" (UID: \"528da5b1-5cfd-42dd-bfaf-ad82eb579d97\") " pod="openstack/openstackclient" Feb 16 13:55:49 crc kubenswrapper[4812]: I0216 13:55:49.080112 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/528da5b1-5cfd-42dd-bfaf-ad82eb579d97-combined-ca-bundle\") pod \"openstackclient\" (UID: \"528da5b1-5cfd-42dd-bfaf-ad82eb579d97\") " pod="openstack/openstackclient" Feb 16 13:55:49 crc kubenswrapper[4812]: I0216 13:55:49.084664 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/528da5b1-5cfd-42dd-bfaf-ad82eb579d97-openstack-config\") pod \"openstackclient\" (UID: \"528da5b1-5cfd-42dd-bfaf-ad82eb579d97\") " pod="openstack/openstackclient" Feb 16 13:55:49 crc kubenswrapper[4812]: I0216 13:55:49.094362 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/528da5b1-5cfd-42dd-bfaf-ad82eb579d97-openstack-config-secret\") pod \"openstackclient\" (UID: \"528da5b1-5cfd-42dd-bfaf-ad82eb579d97\") " pod="openstack/openstackclient" Feb 16 13:55:49 crc kubenswrapper[4812]: I0216 13:55:49.106974 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/528da5b1-5cfd-42dd-bfaf-ad82eb579d97-combined-ca-bundle\") pod \"openstackclient\" (UID: \"528da5b1-5cfd-42dd-bfaf-ad82eb579d97\") " pod="openstack/openstackclient" Feb 16 13:55:49 crc kubenswrapper[4812]: I0216 13:55:49.120278 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4p9n\" (UniqueName: \"kubernetes.io/projected/528da5b1-5cfd-42dd-bfaf-ad82eb579d97-kube-api-access-v4p9n\") pod \"openstackclient\" (UID: \"528da5b1-5cfd-42dd-bfaf-ad82eb579d97\") " pod="openstack/openstackclient" Feb 16 13:55:49 crc kubenswrapper[4812]: I0216 13:55:49.344302 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 13:55:49 crc kubenswrapper[4812]: I0216 13:55:49.387793 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-775b4c67cd-9n6f8" podUID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 13:55:49 crc kubenswrapper[4812]: I0216 13:55:49.434124 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-775b4c67cd-9n6f8" podUID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 13:55:49 crc kubenswrapper[4812]: I0216 13:55:49.465156 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:55:49 crc kubenswrapper[4812]: I0216 13:55:49.943548 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 13:55:50 crc kubenswrapper[4812]: I0216 13:55:50.096726 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:55:50 crc kubenswrapper[4812]: I0216 13:55:50.426092 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-4ww9m" podUID="1492db35-d6ea-4d34-b29a-6d5537694379" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: i/o timeout" Feb 16 13:55:50 crc kubenswrapper[4812]: I0216 13:55:50.434062 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" containerName="cinder-scheduler" 
containerID="cri-o://d3b7780a900fdcea6a91e94c527cbab05a8a72f3d47ea137d55344ae7b997b60" gracePeriod=30 Feb 16 13:55:50 crc kubenswrapper[4812]: I0216 13:55:50.434694 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" containerName="probe" containerID="cri-o://38ca181d57564e3d6ad706c51a9193d16d41a76b7857f3e4ebaba369b407f564" gracePeriod=30 Feb 16 13:55:50 crc kubenswrapper[4812]: I0216 13:55:50.509067 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 13:55:50 crc kubenswrapper[4812]: I0216 13:55:50.576746 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-dd87694f4-8qsk9" podUID="c51849be-b016-41a0-9959-654f56fd10c2" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.187:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:55:50 crc kubenswrapper[4812]: I0216 13:55:50.577264 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-dd87694f4-8qsk9" podUID="c51849be-b016-41a0-9959-654f56fd10c2" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.187:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:55:51 crc kubenswrapper[4812]: I0216 13:55:51.500155 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"528da5b1-5cfd-42dd-bfaf-ad82eb579d97","Type":"ContainerStarted","Data":"a5a2e738096cc5e1e9db0e7e277ac30823d9b0d9be8a4a0971ff533e058c138a"} Feb 16 13:55:51 crc kubenswrapper[4812]: I0216 13:55:51.615002 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-dd87694f4-8qsk9" podUID="c51849be-b016-41a0-9959-654f56fd10c2" containerName="barbican-api-log" probeResult="failure" output="Get 
\"https://10.217.0.187:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:55:51 crc kubenswrapper[4812]: I0216 13:55:51.615956 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-dd87694f4-8qsk9" podUID="c51849be-b016-41a0-9959-654f56fd10c2" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.187:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:55:52 crc kubenswrapper[4812]: I0216 13:55:52.366428 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:52 crc kubenswrapper[4812]: I0216 13:55:52.376461 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:52 crc kubenswrapper[4812]: I0216 13:55:52.534852 4812 generic.go:334] "Generic (PLEG): container finished" podID="4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" containerID="38ca181d57564e3d6ad706c51a9193d16d41a76b7857f3e4ebaba369b407f564" exitCode=0 Feb 16 13:55:52 crc kubenswrapper[4812]: I0216 13:55:52.536601 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a","Type":"ContainerDied","Data":"38ca181d57564e3d6ad706c51a9193d16d41a76b7857f3e4ebaba369b407f564"} Feb 16 13:55:52 crc kubenswrapper[4812]: I0216 13:55:52.551757 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 13:55:54 crc kubenswrapper[4812]: I0216 13:55:54.586738 4812 generic.go:334] "Generic (PLEG): container finished" podID="4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" containerID="d3b7780a900fdcea6a91e94c527cbab05a8a72f3d47ea137d55344ae7b997b60" exitCode=0 Feb 16 13:55:54 crc kubenswrapper[4812]: I0216 13:55:54.587360 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a","Type":"ContainerDied","Data":"d3b7780a900fdcea6a91e94c527cbab05a8a72f3d47ea137d55344ae7b997b60"} Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.039839 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.152891 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-config-data-custom\") pod \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.153290 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-config-data\") pod \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.153365 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpz8g\" (UniqueName: \"kubernetes.io/projected/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-kube-api-access-tpz8g\") pod \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.153519 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-combined-ca-bundle\") pod \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.153679 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-scripts\") pod \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.153812 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-etc-machine-id\") pod \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\" (UID: \"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a\") " Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.161901 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" (UID: "4f2f2975-b4e8-4a6e-8a42-4eaeca03595a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.321421 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-kube-api-access-tpz8g" (OuterVolumeSpecName: "kube-api-access-tpz8g") pod "4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" (UID: "4f2f2975-b4e8-4a6e-8a42-4eaeca03595a"). InnerVolumeSpecName "kube-api-access-tpz8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.321616 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" (UID: "4f2f2975-b4e8-4a6e-8a42-4eaeca03595a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.329729 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-scripts" (OuterVolumeSpecName: "scripts") pod "4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" (UID: "4f2f2975-b4e8-4a6e-8a42-4eaeca03595a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.339944 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.339999 4812 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.340015 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.340025 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpz8g\" (UniqueName: \"kubernetes.io/projected/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-kube-api-access-tpz8g\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.466491 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" (UID: "4f2f2975-b4e8-4a6e-8a42-4eaeca03595a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.564078 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.565015 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-config-data" (OuterVolumeSpecName: "config-data") pod "4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" (UID: "4f2f2975-b4e8-4a6e-8a42-4eaeca03595a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.582176 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-dd87694f4-8qsk9" podUID="c51849be-b016-41a0-9959-654f56fd10c2" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.187:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.627687 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.630594 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4f2f2975-b4e8-4a6e-8a42-4eaeca03595a","Type":"ContainerDied","Data":"b71d9e07ae9e66442e62832aaba233ffc416ff67e983dccd905b6ba0ea9a5f49"} Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.630823 4812 scope.go:117] "RemoveContainer" containerID="38ca181d57564e3d6ad706c51a9193d16d41a76b7857f3e4ebaba369b407f564" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.670023 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.694547 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.695746 4812 scope.go:117] "RemoveContainer" containerID="d3b7780a900fdcea6a91e94c527cbab05a8a72f3d47ea137d55344ae7b997b60" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.709241 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.731238 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:55:55 crc kubenswrapper[4812]: E0216 13:55:55.731975 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" containerName="cinder-scheduler" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.732005 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" containerName="cinder-scheduler" Feb 16 13:55:55 crc kubenswrapper[4812]: E0216 13:55:55.732059 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" containerName="probe" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.732072 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" containerName="probe" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.732365 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" containerName="probe" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.732398 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" containerName="cinder-scheduler" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.734672 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.743186 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.751252 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.833208 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.882967 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a822cac0-26cb-430a-8c4f-78d11b7451dd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.883077 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a822cac0-26cb-430a-8c4f-78d11b7451dd-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.883296 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a822cac0-26cb-430a-8c4f-78d11b7451dd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.883359 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hbvp\" (UniqueName: \"kubernetes.io/projected/a822cac0-26cb-430a-8c4f-78d11b7451dd-kube-api-access-7hbvp\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.883515 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a822cac0-26cb-430a-8c4f-78d11b7451dd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.883815 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a822cac0-26cb-430a-8c4f-78d11b7451dd-scripts\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.927612 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f2f2975-b4e8-4a6e-8a42-4eaeca03595a" path="/var/lib/kubelet/pods/4f2f2975-b4e8-4a6e-8a42-4eaeca03595a/volumes" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.989381 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a822cac0-26cb-430a-8c4f-78d11b7451dd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.989659 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a822cac0-26cb-430a-8c4f-78d11b7451dd-scripts\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.989711 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a822cac0-26cb-430a-8c4f-78d11b7451dd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.989750 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a822cac0-26cb-430a-8c4f-78d11b7451dd-config-data\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.989842 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a822cac0-26cb-430a-8c4f-78d11b7451dd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.989874 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hbvp\" (UniqueName: 
\"kubernetes.io/projected/a822cac0-26cb-430a-8c4f-78d11b7451dd-kube-api-access-7hbvp\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:55 crc kubenswrapper[4812]: I0216 13:55:55.990853 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a822cac0-26cb-430a-8c4f-78d11b7451dd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:56 crc kubenswrapper[4812]: I0216 13:55:55.999278 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a822cac0-26cb-430a-8c4f-78d11b7451dd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:56 crc kubenswrapper[4812]: I0216 13:55:56.004310 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a822cac0-26cb-430a-8c4f-78d11b7451dd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:56 crc kubenswrapper[4812]: I0216 13:55:56.010527 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a822cac0-26cb-430a-8c4f-78d11b7451dd-scripts\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:56 crc kubenswrapper[4812]: I0216 13:55:56.020524 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a822cac0-26cb-430a-8c4f-78d11b7451dd-config-data\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 
13:55:56 crc kubenswrapper[4812]: I0216 13:55:56.039464 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hbvp\" (UniqueName: \"kubernetes.io/projected/a822cac0-26cb-430a-8c4f-78d11b7451dd-kube-api-access-7hbvp\") pod \"cinder-scheduler-0\" (UID: \"a822cac0-26cb-430a-8c4f-78d11b7451dd\") " pod="openstack/cinder-scheduler-0" Feb 16 13:55:56 crc kubenswrapper[4812]: I0216 13:55:56.077875 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 13:55:56 crc kubenswrapper[4812]: I0216 13:55:56.626038 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-dd87694f4-8qsk9" podUID="c51849be-b016-41a0-9959-654f56fd10c2" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.187:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:55:56 crc kubenswrapper[4812]: I0216 13:55:56.664101 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-dd87694f4-8qsk9" Feb 16 13:55:56 crc kubenswrapper[4812]: I0216 13:55:56.801748 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-775b4c67cd-9n6f8"] Feb 16 13:55:56 crc kubenswrapper[4812]: I0216 13:55:56.804871 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-775b4c67cd-9n6f8" podUID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerName="barbican-api" containerID="cri-o://99df326f5bca081a2c902aa913c70c7ca3924a5448a2a5ff7018d960f3d48b33" gracePeriod=30 Feb 16 13:55:56 crc kubenswrapper[4812]: I0216 13:55:56.802838 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-775b4c67cd-9n6f8" podUID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerName="barbican-api-log" containerID="cri-o://6a8312078a00af4cd140ab5529cf65ddfed82d96b8d60bbab70197276af0dc09" 
gracePeriod=30 Feb 16 13:55:56 crc kubenswrapper[4812]: I0216 13:55:56.995791 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 13:55:57 crc kubenswrapper[4812]: I0216 13:55:57.782661 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a822cac0-26cb-430a-8c4f-78d11b7451dd","Type":"ContainerStarted","Data":"8804f1c24d99fa918b72a48346c66576e917e3afa06233de413ba263ac896c10"} Feb 16 13:55:57 crc kubenswrapper[4812]: I0216 13:55:57.792995 4812 generic.go:334] "Generic (PLEG): container finished" podID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerID="6a8312078a00af4cd140ab5529cf65ddfed82d96b8d60bbab70197276af0dc09" exitCode=143 Feb 16 13:55:57 crc kubenswrapper[4812]: I0216 13:55:57.793111 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-775b4c67cd-9n6f8" event={"ID":"938fb099-5861-4d4f-8105-bcd26cbbcabd","Type":"ContainerDied","Data":"6a8312078a00af4cd140ab5529cf65ddfed82d96b8d60bbab70197276af0dc09"} Feb 16 13:55:57 crc kubenswrapper[4812]: E0216 13:55:57.899676 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:55:58 crc kubenswrapper[4812]: I0216 13:55:58.207857 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="73d33b57-0c02-4e05-b1a2-0d3075385bd4" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.189:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:55:58 crc kubenswrapper[4812]: I0216 13:55:58.871734 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"a822cac0-26cb-430a-8c4f-78d11b7451dd","Type":"ContainerStarted","Data":"4378dd414a4a7dc8a3309b69bb08025bfba208448e4e7f38ea981839b5c2a313"} Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.137092 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6d67c77f6c-gcgq7"] Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.155643 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.168012 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.181801 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.185417 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.185408 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d67c77f6c-gcgq7"] Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.347330 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49cfe7b6-0403-4fae-8c40-9fdec91bceee-log-httpd\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.348636 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49cfe7b6-0403-4fae-8c40-9fdec91bceee-run-httpd\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: 
I0216 13:55:59.348742 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49cfe7b6-0403-4fae-8c40-9fdec91bceee-etc-swift\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.348819 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49cfe7b6-0403-4fae-8c40-9fdec91bceee-config-data\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.348925 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49cfe7b6-0403-4fae-8c40-9fdec91bceee-combined-ca-bundle\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.349041 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49cfe7b6-0403-4fae-8c40-9fdec91bceee-internal-tls-certs\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.349188 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttgdv\" (UniqueName: \"kubernetes.io/projected/49cfe7b6-0403-4fae-8c40-9fdec91bceee-kube-api-access-ttgdv\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " 
pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.349235 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49cfe7b6-0403-4fae-8c40-9fdec91bceee-public-tls-certs\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.452673 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttgdv\" (UniqueName: \"kubernetes.io/projected/49cfe7b6-0403-4fae-8c40-9fdec91bceee-kube-api-access-ttgdv\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.452923 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49cfe7b6-0403-4fae-8c40-9fdec91bceee-public-tls-certs\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.453282 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49cfe7b6-0403-4fae-8c40-9fdec91bceee-log-httpd\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.453363 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49cfe7b6-0403-4fae-8c40-9fdec91bceee-run-httpd\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " 
pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.458702 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49cfe7b6-0403-4fae-8c40-9fdec91bceee-log-httpd\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.458792 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49cfe7b6-0403-4fae-8c40-9fdec91bceee-run-httpd\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.462665 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49cfe7b6-0403-4fae-8c40-9fdec91bceee-etc-swift\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.462852 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49cfe7b6-0403-4fae-8c40-9fdec91bceee-config-data\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.463066 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49cfe7b6-0403-4fae-8c40-9fdec91bceee-combined-ca-bundle\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 
13:55:59.463239 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49cfe7b6-0403-4fae-8c40-9fdec91bceee-internal-tls-certs\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.490760 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49cfe7b6-0403-4fae-8c40-9fdec91bceee-combined-ca-bundle\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.494086 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49cfe7b6-0403-4fae-8c40-9fdec91bceee-public-tls-certs\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.496861 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49cfe7b6-0403-4fae-8c40-9fdec91bceee-internal-tls-certs\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.499198 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49cfe7b6-0403-4fae-8c40-9fdec91bceee-etc-swift\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.500369 4812 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49cfe7b6-0403-4fae-8c40-9fdec91bceee-config-data\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.509520 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttgdv\" (UniqueName: \"kubernetes.io/projected/49cfe7b6-0403-4fae-8c40-9fdec91bceee-kube-api-access-ttgdv\") pod \"swift-proxy-6d67c77f6c-gcgq7\" (UID: \"49cfe7b6-0403-4fae-8c40-9fdec91bceee\") " pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.537354 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:55:59 crc kubenswrapper[4812]: I0216 13:55:59.943794 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a822cac0-26cb-430a-8c4f-78d11b7451dd","Type":"ContainerStarted","Data":"e67ebb7ce5a8d3f36ab6495f071f32232bab7b227cec64da61df0739980c82fc"} Feb 16 13:56:00 crc kubenswrapper[4812]: I0216 13:56:00.056874 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.056832124 podStartE2EDuration="5.056832124s" podCreationTimestamp="2026-02-16 13:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:55:59.983140682 +0000 UTC m=+1449.047471383" watchObservedRunningTime="2026-02-16 13:56:00.056832124 +0000 UTC m=+1449.121162825" Feb 16 13:56:00 crc kubenswrapper[4812]: I0216 13:56:00.333769 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="73d33b57-0c02-4e05-b1a2-0d3075385bd4" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.189:8776/healthcheck\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:56:00 crc kubenswrapper[4812]: I0216 13:56:00.494837 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d67c77f6c-gcgq7"] Feb 16 13:56:00 crc kubenswrapper[4812]: I0216 13:56:00.602699 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 13:56:00 crc kubenswrapper[4812]: I0216 13:56:00.752703 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-775b4c67cd-9n6f8" podUID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": read tcp 10.217.0.2:38080->10.217.0.183:9311: read: connection reset by peer" Feb 16 13:56:00 crc kubenswrapper[4812]: I0216 13:56:00.753063 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-775b4c67cd-9n6f8" podUID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": read tcp 10.217.0.2:38074->10.217.0.183:9311: read: connection reset by peer" Feb 16 13:56:00 crc kubenswrapper[4812]: I0216 13:56:00.980269 4812 generic.go:334] "Generic (PLEG): container finished" podID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerID="99df326f5bca081a2c902aa913c70c7ca3924a5448a2a5ff7018d960f3d48b33" exitCode=0 Feb 16 13:56:00 crc kubenswrapper[4812]: I0216 13:56:00.980892 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-775b4c67cd-9n6f8" event={"ID":"938fb099-5861-4d4f-8105-bcd26cbbcabd","Type":"ContainerDied","Data":"99df326f5bca081a2c902aa913c70c7ca3924a5448a2a5ff7018d960f3d48b33"} Feb 16 13:56:00 crc kubenswrapper[4812]: I0216 13:56:00.991612 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d67c77f6c-gcgq7" 
event={"ID":"49cfe7b6-0403-4fae-8c40-9fdec91bceee","Type":"ContainerStarted","Data":"125dfa14d02c10e22bd88c4ff5055651cfaed0045d6d8df97d493e97f94149d0"} Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.084908 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.491033 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.589387 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sprc8\" (UniqueName: \"kubernetes.io/projected/938fb099-5861-4d4f-8105-bcd26cbbcabd-kube-api-access-sprc8\") pod \"938fb099-5861-4d4f-8105-bcd26cbbcabd\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.589565 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/938fb099-5861-4d4f-8105-bcd26cbbcabd-logs\") pod \"938fb099-5861-4d4f-8105-bcd26cbbcabd\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.589762 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-config-data-custom\") pod \"938fb099-5861-4d4f-8105-bcd26cbbcabd\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.590764 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/938fb099-5861-4d4f-8105-bcd26cbbcabd-logs" (OuterVolumeSpecName: "logs") pod "938fb099-5861-4d4f-8105-bcd26cbbcabd" (UID: "938fb099-5861-4d4f-8105-bcd26cbbcabd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.591316 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/938fb099-5861-4d4f-8105-bcd26cbbcabd-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.599830 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/938fb099-5861-4d4f-8105-bcd26cbbcabd-kube-api-access-sprc8" (OuterVolumeSpecName: "kube-api-access-sprc8") pod "938fb099-5861-4d4f-8105-bcd26cbbcabd" (UID: "938fb099-5861-4d4f-8105-bcd26cbbcabd"). InnerVolumeSpecName "kube-api-access-sprc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.608716 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "938fb099-5861-4d4f-8105-bcd26cbbcabd" (UID: "938fb099-5861-4d4f-8105-bcd26cbbcabd"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.692178 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-combined-ca-bundle\") pod \"938fb099-5861-4d4f-8105-bcd26cbbcabd\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.692248 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-config-data\") pod \"938fb099-5861-4d4f-8105-bcd26cbbcabd\" (UID: \"938fb099-5861-4d4f-8105-bcd26cbbcabd\") " Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.692668 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sprc8\" (UniqueName: \"kubernetes.io/projected/938fb099-5861-4d4f-8105-bcd26cbbcabd-kube-api-access-sprc8\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.692723 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.732398 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "938fb099-5861-4d4f-8105-bcd26cbbcabd" (UID: "938fb099-5861-4d4f-8105-bcd26cbbcabd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.784177 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-config-data" (OuterVolumeSpecName: "config-data") pod "938fb099-5861-4d4f-8105-bcd26cbbcabd" (UID: "938fb099-5861-4d4f-8105-bcd26cbbcabd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.795740 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:01 crc kubenswrapper[4812]: I0216 13:56:01.795783 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938fb099-5861-4d4f-8105-bcd26cbbcabd-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.018707 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-775b4c67cd-9n6f8" event={"ID":"938fb099-5861-4d4f-8105-bcd26cbbcabd","Type":"ContainerDied","Data":"e2ed4eace6eda27e37954409d269577e6fcfd7e387717a956447c1d0da44e2b3"} Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.018779 4812 scope.go:117] "RemoveContainer" containerID="99df326f5bca081a2c902aa913c70c7ca3924a5448a2a5ff7018d960f3d48b33" Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.018971 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-775b4c67cd-9n6f8" Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.038961 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d67c77f6c-gcgq7" event={"ID":"49cfe7b6-0403-4fae-8c40-9fdec91bceee","Type":"ContainerStarted","Data":"66dae87cac4afd029395ef4c79dbded68916c84e055d44514c9b59798bf731c4"} Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.039034 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d67c77f6c-gcgq7" event={"ID":"49cfe7b6-0403-4fae-8c40-9fdec91bceee","Type":"ContainerStarted","Data":"b1e0945d786cfae061558e072ec64c2e59391e7779c53219eeb4f86c90d5c6f6"} Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.039053 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.039178 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.066509 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-775b4c67cd-9n6f8"] Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.076545 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-775b4c67cd-9n6f8"] Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.090807 4812 scope.go:117] "RemoveContainer" containerID="6a8312078a00af4cd140ab5529cf65ddfed82d96b8d60bbab70197276af0dc09" Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.091760 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.092245 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerName="ceilometer-central-agent" 
containerID="cri-o://477c06b4ba80370c1ae5846f1b3c91e49c6ec8c823f8fb9b2fdad46011225d66" gracePeriod=30 Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.092515 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerName="ceilometer-notification-agent" containerID="cri-o://5964725c040789733634d317ab49e2313fbaf11ec314384a28443cd22a3b1be7" gracePeriod=30 Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.092432 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerName="sg-core" containerID="cri-o://11d1f796f6c4fbd7354fe51f676de7d0d54912b105777cdecb7f20c0defd22ac" gracePeriod=30 Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.092689 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerName="proxy-httpd" containerID="cri-o://ef0d1891d71d6d74ce69141dc770e23d61636db3fd3014496b8d3f465d7e4072" gracePeriod=30 Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.142579 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6d67c77f6c-gcgq7" podStartSLOduration=3.142544159 podStartE2EDuration="3.142544159s" podCreationTimestamp="2026-02-16 13:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:56:02.095149081 +0000 UTC m=+1451.159479792" watchObservedRunningTime="2026-02-16 13:56:02.142544159 +0000 UTC m=+1451.206874860" Feb 16 13:56:02 crc kubenswrapper[4812]: I0216 13:56:02.315155 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.081415 4812 generic.go:334] "Generic (PLEG): container 
finished" podID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerID="ef0d1891d71d6d74ce69141dc770e23d61636db3fd3014496b8d3f465d7e4072" exitCode=0 Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.081505 4812 generic.go:334] "Generic (PLEG): container finished" podID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerID="11d1f796f6c4fbd7354fe51f676de7d0d54912b105777cdecb7f20c0defd22ac" exitCode=2 Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.081517 4812 generic.go:334] "Generic (PLEG): container finished" podID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerID="5964725c040789733634d317ab49e2313fbaf11ec314384a28443cd22a3b1be7" exitCode=0 Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.081530 4812 generic.go:334] "Generic (PLEG): container finished" podID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerID="477c06b4ba80370c1ae5846f1b3c91e49c6ec8c823f8fb9b2fdad46011225d66" exitCode=0 Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.081528 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ea4e67-222a-4f37-9d17-371c857ef7c4","Type":"ContainerDied","Data":"ef0d1891d71d6d74ce69141dc770e23d61636db3fd3014496b8d3f465d7e4072"} Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.081624 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ea4e67-222a-4f37-9d17-371c857ef7c4","Type":"ContainerDied","Data":"11d1f796f6c4fbd7354fe51f676de7d0d54912b105777cdecb7f20c0defd22ac"} Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.081637 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ea4e67-222a-4f37-9d17-371c857ef7c4","Type":"ContainerDied","Data":"5964725c040789733634d317ab49e2313fbaf11ec314384a28443cd22a3b1be7"} Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.081648 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b3ea4e67-222a-4f37-9d17-371c857ef7c4","Type":"ContainerDied","Data":"477c06b4ba80370c1ae5846f1b3c91e49c6ec8c823f8fb9b2fdad46011225d66"} Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.639669 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.820132 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ea4e67-222a-4f37-9d17-371c857ef7c4-run-httpd\") pod \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.820711 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-combined-ca-bundle\") pod \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.820790 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-sg-core-conf-yaml\") pod \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.820857 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j25m\" (UniqueName: \"kubernetes.io/projected/b3ea4e67-222a-4f37-9d17-371c857ef7c4-kube-api-access-7j25m\") pod \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.820904 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-scripts\") pod 
\"b3ea4e67-222a-4f37-9d17-371c857ef7c4\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.821230 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-config-data\") pod \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.821357 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ea4e67-222a-4f37-9d17-371c857ef7c4-log-httpd\") pod \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\" (UID: \"b3ea4e67-222a-4f37-9d17-371c857ef7c4\") " Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.822751 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3ea4e67-222a-4f37-9d17-371c857ef7c4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b3ea4e67-222a-4f37-9d17-371c857ef7c4" (UID: "b3ea4e67-222a-4f37-9d17-371c857ef7c4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.823815 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3ea4e67-222a-4f37-9d17-371c857ef7c4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b3ea4e67-222a-4f37-9d17-371c857ef7c4" (UID: "b3ea4e67-222a-4f37-9d17-371c857ef7c4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.861164 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ea4e67-222a-4f37-9d17-371c857ef7c4-kube-api-access-7j25m" (OuterVolumeSpecName: "kube-api-access-7j25m") pod "b3ea4e67-222a-4f37-9d17-371c857ef7c4" (UID: "b3ea4e67-222a-4f37-9d17-371c857ef7c4"). 
InnerVolumeSpecName "kube-api-access-7j25m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.886761 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b3ea4e67-222a-4f37-9d17-371c857ef7c4" (UID: "b3ea4e67-222a-4f37-9d17-371c857ef7c4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.896080 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-scripts" (OuterVolumeSpecName: "scripts") pod "b3ea4e67-222a-4f37-9d17-371c857ef7c4" (UID: "b3ea4e67-222a-4f37-9d17-371c857ef7c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.925614 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="938fb099-5861-4d4f-8105-bcd26cbbcabd" path="/var/lib/kubelet/pods/938fb099-5861-4d4f-8105-bcd26cbbcabd/volumes" Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.927384 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ea4e67-222a-4f37-9d17-371c857ef7c4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.927434 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.927527 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j25m\" (UniqueName: \"kubernetes.io/projected/b3ea4e67-222a-4f37-9d17-371c857ef7c4-kube-api-access-7j25m\") on node 
\"crc\" DevicePath \"\"" Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.927580 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:03 crc kubenswrapper[4812]: I0216 13:56:03.927598 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ea4e67-222a-4f37-9d17-371c857ef7c4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.028859 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3ea4e67-222a-4f37-9d17-371c857ef7c4" (UID: "b3ea4e67-222a-4f37-9d17-371c857ef7c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.031948 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.068811 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-config-data" (OuterVolumeSpecName: "config-data") pod "b3ea4e67-222a-4f37-9d17-371c857ef7c4" (UID: "b3ea4e67-222a-4f37-9d17-371c857ef7c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.135642 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ea4e67-222a-4f37-9d17-371c857ef7c4-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.135810 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ea4e67-222a-4f37-9d17-371c857ef7c4","Type":"ContainerDied","Data":"ab82d7434e0e5eafde2202594edaa832b51173c34f47c36eac8a45c2ef7ea76b"} Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.135902 4812 scope.go:117] "RemoveContainer" containerID="ef0d1891d71d6d74ce69141dc770e23d61636db3fd3014496b8d3f465d7e4072" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.136153 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.201189 4812 scope.go:117] "RemoveContainer" containerID="11d1f796f6c4fbd7354fe51f676de7d0d54912b105777cdecb7f20c0defd22ac" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.212814 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.227498 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.252320 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:56:04 crc kubenswrapper[4812]: E0216 13:56:04.253370 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerName="barbican-api" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.253398 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerName="barbican-api" Feb 16 13:56:04 crc 
kubenswrapper[4812]: E0216 13:56:04.253408 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerName="sg-core" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.253416 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerName="sg-core" Feb 16 13:56:04 crc kubenswrapper[4812]: E0216 13:56:04.253453 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerName="ceilometer-central-agent" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.253460 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerName="ceilometer-central-agent" Feb 16 13:56:04 crc kubenswrapper[4812]: E0216 13:56:04.253480 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerName="ceilometer-notification-agent" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.253489 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerName="ceilometer-notification-agent" Feb 16 13:56:04 crc kubenswrapper[4812]: E0216 13:56:04.253498 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerName="barbican-api-log" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.253505 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerName="barbican-api-log" Feb 16 13:56:04 crc kubenswrapper[4812]: E0216 13:56:04.253524 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerName="proxy-httpd" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.253531 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerName="proxy-httpd" Feb 16 
13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.253768 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerName="ceilometer-central-agent" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.253793 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerName="sg-core" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.253806 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerName="barbican-api-log" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.253823 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerName="proxy-httpd" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.253835 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" containerName="ceilometer-notification-agent" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.253845 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="938fb099-5861-4d4f-8105-bcd26cbbcabd" containerName="barbican-api" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.258205 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.266057 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.266275 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.268784 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.312604 4812 scope.go:117] "RemoveContainer" containerID="5964725c040789733634d317ab49e2313fbaf11ec314384a28443cd22a3b1be7" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.344887 4812 scope.go:117] "RemoveContainer" containerID="477c06b4ba80370c1ae5846f1b3c91e49c6ec8c823f8fb9b2fdad46011225d66" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.443694 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97982589-9f36-48d3-929e-d6f0d2b83a3b-log-httpd\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.443966 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.444416 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-scripts\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 
16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.444686 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-config-data\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.444845 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.444941 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbzpw\" (UniqueName: \"kubernetes.io/projected/97982589-9f36-48d3-929e-d6f0d2b83a3b-kube-api-access-jbzpw\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.445012 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97982589-9f36-48d3-929e-d6f0d2b83a3b-run-httpd\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.548788 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-config-data\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.548940 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.548998 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbzpw\" (UniqueName: \"kubernetes.io/projected/97982589-9f36-48d3-929e-d6f0d2b83a3b-kube-api-access-jbzpw\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.549034 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97982589-9f36-48d3-929e-d6f0d2b83a3b-run-httpd\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.549087 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97982589-9f36-48d3-929e-d6f0d2b83a3b-log-httpd\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.549215 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.549349 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-scripts\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 
crc kubenswrapper[4812]: I0216 13:56:04.550795 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97982589-9f36-48d3-929e-d6f0d2b83a3b-log-httpd\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.557270 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.560557 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-scripts\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.561216 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97982589-9f36-48d3-929e-d6f0d2b83a3b-run-httpd\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.573495 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.575961 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbzpw\" (UniqueName: \"kubernetes.io/projected/97982589-9f36-48d3-929e-d6f0d2b83a3b-kube-api-access-jbzpw\") pod \"ceilometer-0\" (UID: 
\"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.577108 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-config-data\") pod \"ceilometer-0\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " pod="openstack/ceilometer-0" Feb 16 13:56:04 crc kubenswrapper[4812]: I0216 13:56:04.611777 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:56:05 crc kubenswrapper[4812]: I0216 13:56:05.233013 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:56:05 crc kubenswrapper[4812]: I0216 13:56:05.910035 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3ea4e67-222a-4f37-9d17-371c857ef7c4" path="/var/lib/kubelet/pods/b3ea4e67-222a-4f37-9d17-371c857ef7c4/volumes" Feb 16 13:56:06 crc kubenswrapper[4812]: I0216 13:56:06.177532 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97982589-9f36-48d3-929e-d6f0d2b83a3b","Type":"ContainerStarted","Data":"7b80b5f65f40b757c55e276d6161b9385de1865b8c2a4728d286595a4482655a"} Feb 16 13:56:06 crc kubenswrapper[4812]: I0216 13:56:06.419867 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 13:56:08 crc kubenswrapper[4812]: E0216 13:56:08.883347 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:56:09 crc kubenswrapper[4812]: I0216 13:56:09.545697 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:56:09 crc kubenswrapper[4812]: I0216 13:56:09.549848 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d67c77f6c-gcgq7" Feb 16 13:56:10 crc kubenswrapper[4812]: I0216 13:56:10.047278 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:56:14 crc kubenswrapper[4812]: I0216 13:56:14.201419 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:56:14 crc kubenswrapper[4812]: I0216 13:56:14.214061 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f575bfd48-dqv2k" Feb 16 13:56:14 crc kubenswrapper[4812]: I0216 13:56:14.297472 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97982589-9f36-48d3-929e-d6f0d2b83a3b","Type":"ContainerStarted","Data":"5b04b8705207f032c0b920d34c5c7af28f4a8e3af57a3ea22c2a52fbf1b98aff"} Feb 16 13:56:14 crc kubenswrapper[4812]: I0216 13:56:14.353153 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5dd887c4d-zfnsh"] Feb 16 13:56:14 crc kubenswrapper[4812]: I0216 13:56:14.353641 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5dd887c4d-zfnsh" podUID="ebf72004-b885-40eb-94ca-bce1652d96c1" containerName="placement-log" containerID="cri-o://f33c89b59ef51d4494f4fb0e3233676426e29b9509f7126a1839a7b0a9dbe451" gracePeriod=30 Feb 16 13:56:14 crc kubenswrapper[4812]: I0216 13:56:14.354363 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5dd887c4d-zfnsh" podUID="ebf72004-b885-40eb-94ca-bce1652d96c1" containerName="placement-api" containerID="cri-o://cc93fe2a9bb702e5e3ca0d79e36bea97cb6172995231ff61d0befbabc803bd4a" gracePeriod=30 Feb 16 13:56:15 crc kubenswrapper[4812]: I0216 13:56:15.323434 4812 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"528da5b1-5cfd-42dd-bfaf-ad82eb579d97","Type":"ContainerStarted","Data":"48782ae61e3c3f88695eaf5bbfe965d3907aaefa70d36df41f3b4cb6a9c2544b"} Feb 16 13:56:15 crc kubenswrapper[4812]: I0216 13:56:15.332153 4812 generic.go:334] "Generic (PLEG): container finished" podID="ebf72004-b885-40eb-94ca-bce1652d96c1" containerID="f33c89b59ef51d4494f4fb0e3233676426e29b9509f7126a1839a7b0a9dbe451" exitCode=143 Feb 16 13:56:15 crc kubenswrapper[4812]: I0216 13:56:15.332263 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dd887c4d-zfnsh" event={"ID":"ebf72004-b885-40eb-94ca-bce1652d96c1","Type":"ContainerDied","Data":"f33c89b59ef51d4494f4fb0e3233676426e29b9509f7126a1839a7b0a9dbe451"} Feb 16 13:56:15 crc kubenswrapper[4812]: I0216 13:56:15.336892 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97982589-9f36-48d3-929e-d6f0d2b83a3b","Type":"ContainerStarted","Data":"5ea5abde6d8dd7532d9530966638b13490d2d4e865f2f8065f792940399033f1"} Feb 16 13:56:15 crc kubenswrapper[4812]: I0216 13:56:15.358589 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.891669088 podStartE2EDuration="27.358553325s" podCreationTimestamp="2026-02-16 13:55:48 +0000 UTC" firstStartedPulling="2026-02-16 13:55:50.515852678 +0000 UTC m=+1439.580183379" lastFinishedPulling="2026-02-16 13:56:13.982736915 +0000 UTC m=+1463.047067616" observedRunningTime="2026-02-16 13:56:15.351609613 +0000 UTC m=+1464.415940314" watchObservedRunningTime="2026-02-16 13:56:15.358553325 +0000 UTC m=+1464.422884026" Feb 16 13:56:15 crc kubenswrapper[4812]: I0216 13:56:15.558658 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:56:15 crc kubenswrapper[4812]: I0216 13:56:15.559051 4812 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="a79d4b09-3b4f-4594-bda3-f219239f9471" containerName="glance-log" containerID="cri-o://35213eae1d814955f16fa3ba42805f9dee1bddd82b8380c11c48e92e6ea732ef" gracePeriod=30 Feb 16 13:56:15 crc kubenswrapper[4812]: I0216 13:56:15.559154 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a79d4b09-3b4f-4594-bda3-f219239f9471" containerName="glance-httpd" containerID="cri-o://2d1456daa0d1baefd2d6d972a9e3f35748a009fd90208a4d1f18d777f484dcd7" gracePeriod=30 Feb 16 13:56:16 crc kubenswrapper[4812]: I0216 13:56:16.366117 4812 generic.go:334] "Generic (PLEG): container finished" podID="a79d4b09-3b4f-4594-bda3-f219239f9471" containerID="35213eae1d814955f16fa3ba42805f9dee1bddd82b8380c11c48e92e6ea732ef" exitCode=143 Feb 16 13:56:16 crc kubenswrapper[4812]: I0216 13:56:16.366231 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a79d4b09-3b4f-4594-bda3-f219239f9471","Type":"ContainerDied","Data":"35213eae1d814955f16fa3ba42805f9dee1bddd82b8380c11c48e92e6ea732ef"} Feb 16 13:56:16 crc kubenswrapper[4812]: I0216 13:56:16.383977 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97982589-9f36-48d3-929e-d6f0d2b83a3b","Type":"ContainerStarted","Data":"8cf68110f58598903c08b5a1ffccfdf688bc4f7be44022c86c9feb388cc6a7b9"} Feb 16 13:56:16 crc kubenswrapper[4812]: I0216 13:56:16.458038 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5687b6b775-mt8dp" Feb 16 13:56:16 crc kubenswrapper[4812]: I0216 13:56:16.639302 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86c4db556-7x7cc"] Feb 16 13:56:16 crc kubenswrapper[4812]: I0216 13:56:16.640801 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86c4db556-7x7cc" 
podUID="735893be-02d4-49a0-af55-787ea0f940cb" containerName="neutron-api" containerID="cri-o://6b8ce315f7c192cde51e62a7726d33382abf2bfe0aa63c81508e58d9af332537" gracePeriod=30 Feb 16 13:56:16 crc kubenswrapper[4812]: I0216 13:56:16.641841 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86c4db556-7x7cc" podUID="735893be-02d4-49a0-af55-787ea0f940cb" containerName="neutron-httpd" containerID="cri-o://f714ab7e99824f80a0244828f5d93b6625f0548c7fe3b9e53c455da66a0a13c9" gracePeriod=30 Feb 16 13:56:17 crc kubenswrapper[4812]: I0216 13:56:17.437300 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97982589-9f36-48d3-929e-d6f0d2b83a3b","Type":"ContainerStarted","Data":"e033f6e74900f7c0d42acf046887ffc6b649134a2e39d95043c1ba9645f96c76"} Feb 16 13:56:17 crc kubenswrapper[4812]: I0216 13:56:17.437785 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="ceilometer-central-agent" containerID="cri-o://5b04b8705207f032c0b920d34c5c7af28f4a8e3af57a3ea22c2a52fbf1b98aff" gracePeriod=30 Feb 16 13:56:17 crc kubenswrapper[4812]: I0216 13:56:17.438025 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:56:17 crc kubenswrapper[4812]: I0216 13:56:17.438116 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="proxy-httpd" containerID="cri-o://e033f6e74900f7c0d42acf046887ffc6b649134a2e39d95043c1ba9645f96c76" gracePeriod=30 Feb 16 13:56:17 crc kubenswrapper[4812]: I0216 13:56:17.438272 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="sg-core" 
containerID="cri-o://8cf68110f58598903c08b5a1ffccfdf688bc4f7be44022c86c9feb388cc6a7b9" gracePeriod=30 Feb 16 13:56:17 crc kubenswrapper[4812]: I0216 13:56:17.438323 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="ceilometer-notification-agent" containerID="cri-o://5ea5abde6d8dd7532d9530966638b13490d2d4e865f2f8065f792940399033f1" gracePeriod=30 Feb 16 13:56:17 crc kubenswrapper[4812]: I0216 13:56:17.456041 4812 generic.go:334] "Generic (PLEG): container finished" podID="735893be-02d4-49a0-af55-787ea0f940cb" containerID="f714ab7e99824f80a0244828f5d93b6625f0548c7fe3b9e53c455da66a0a13c9" exitCode=0 Feb 16 13:56:17 crc kubenswrapper[4812]: I0216 13:56:17.456156 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86c4db556-7x7cc" event={"ID":"735893be-02d4-49a0-af55-787ea0f940cb","Type":"ContainerDied","Data":"f714ab7e99824f80a0244828f5d93b6625f0548c7fe3b9e53c455da66a0a13c9"} Feb 16 13:56:17 crc kubenswrapper[4812]: I0216 13:56:17.481505 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8377731179999999 podStartE2EDuration="13.481469132s" podCreationTimestamp="2026-02-16 13:56:04 +0000 UTC" firstStartedPulling="2026-02-16 13:56:05.236515851 +0000 UTC m=+1454.300846552" lastFinishedPulling="2026-02-16 13:56:16.880211865 +0000 UTC m=+1465.944542566" observedRunningTime="2026-02-16 13:56:17.470228425 +0000 UTC m=+1466.534559136" watchObservedRunningTime="2026-02-16 13:56:17.481469132 +0000 UTC m=+1466.545799833" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.397495 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.419794 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-config-data\") pod \"ebf72004-b885-40eb-94ca-bce1652d96c1\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.420007 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-internal-tls-certs\") pod \"ebf72004-b885-40eb-94ca-bce1652d96c1\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.420061 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-scripts\") pod \"ebf72004-b885-40eb-94ca-bce1652d96c1\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.420120 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hcmv\" (UniqueName: \"kubernetes.io/projected/ebf72004-b885-40eb-94ca-bce1652d96c1-kube-api-access-9hcmv\") pod \"ebf72004-b885-40eb-94ca-bce1652d96c1\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.420149 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-combined-ca-bundle\") pod \"ebf72004-b885-40eb-94ca-bce1652d96c1\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.420174 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/ebf72004-b885-40eb-94ca-bce1652d96c1-logs\") pod \"ebf72004-b885-40eb-94ca-bce1652d96c1\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.420223 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-public-tls-certs\") pod \"ebf72004-b885-40eb-94ca-bce1652d96c1\" (UID: \"ebf72004-b885-40eb-94ca-bce1652d96c1\") " Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.421350 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebf72004-b885-40eb-94ca-bce1652d96c1-logs" (OuterVolumeSpecName: "logs") pod "ebf72004-b885-40eb-94ca-bce1652d96c1" (UID: "ebf72004-b885-40eb-94ca-bce1652d96c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.547143 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebf72004-b885-40eb-94ca-bce1652d96c1-kube-api-access-9hcmv" (OuterVolumeSpecName: "kube-api-access-9hcmv") pod "ebf72004-b885-40eb-94ca-bce1652d96c1" (UID: "ebf72004-b885-40eb-94ca-bce1652d96c1"). InnerVolumeSpecName "kube-api-access-9hcmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.550573 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf72004-b885-40eb-94ca-bce1652d96c1-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.597813 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-scripts" (OuterVolumeSpecName: "scripts") pod "ebf72004-b885-40eb-94ca-bce1652d96c1" (UID: "ebf72004-b885-40eb-94ca-bce1652d96c1"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.604357 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-config-data" (OuterVolumeSpecName: "config-data") pod "ebf72004-b885-40eb-94ca-bce1652d96c1" (UID: "ebf72004-b885-40eb-94ca-bce1652d96c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.633301 4812 generic.go:334] "Generic (PLEG): container finished" podID="ebf72004-b885-40eb-94ca-bce1652d96c1" containerID="cc93fe2a9bb702e5e3ca0d79e36bea97cb6172995231ff61d0befbabc803bd4a" exitCode=0 Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.633419 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dd887c4d-zfnsh" event={"ID":"ebf72004-b885-40eb-94ca-bce1652d96c1","Type":"ContainerDied","Data":"cc93fe2a9bb702e5e3ca0d79e36bea97cb6172995231ff61d0befbabc803bd4a"} Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.633486 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dd887c4d-zfnsh" event={"ID":"ebf72004-b885-40eb-94ca-bce1652d96c1","Type":"ContainerDied","Data":"c005464224b7a243a46cbdcefbf58f9091d217fd54cdbb6d035788c2003367ad"} Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.633514 4812 scope.go:117] "RemoveContainer" containerID="cc93fe2a9bb702e5e3ca0d79e36bea97cb6172995231ff61d0befbabc803bd4a" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.633851 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5dd887c4d-zfnsh" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.645761 4812 generic.go:334] "Generic (PLEG): container finished" podID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerID="8cf68110f58598903c08b5a1ffccfdf688bc4f7be44022c86c9feb388cc6a7b9" exitCode=2 Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.646180 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97982589-9f36-48d3-929e-d6f0d2b83a3b","Type":"ContainerDied","Data":"8cf68110f58598903c08b5a1ffccfdf688bc4f7be44022c86c9feb388cc6a7b9"} Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.646277 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97982589-9f36-48d3-929e-d6f0d2b83a3b","Type":"ContainerDied","Data":"5ea5abde6d8dd7532d9530966638b13490d2d4e865f2f8065f792940399033f1"} Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.645925 4812 generic.go:334] "Generic (PLEG): container finished" podID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerID="5ea5abde6d8dd7532d9530966638b13490d2d4e865f2f8065f792940399033f1" exitCode=0 Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.648811 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebf72004-b885-40eb-94ca-bce1652d96c1" (UID: "ebf72004-b885-40eb-94ca-bce1652d96c1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.656262 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.656398 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.656419 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hcmv\" (UniqueName: \"kubernetes.io/projected/ebf72004-b885-40eb-94ca-bce1652d96c1-kube-api-access-9hcmv\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.656434 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.682783 4812 scope.go:117] "RemoveContainer" containerID="f33c89b59ef51d4494f4fb0e3233676426e29b9509f7126a1839a7b0a9dbe451" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.733532 4812 scope.go:117] "RemoveContainer" containerID="cc93fe2a9bb702e5e3ca0d79e36bea97cb6172995231ff61d0befbabc803bd4a" Feb 16 13:56:18 crc kubenswrapper[4812]: E0216 13:56:18.734814 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc93fe2a9bb702e5e3ca0d79e36bea97cb6172995231ff61d0befbabc803bd4a\": container with ID starting with cc93fe2a9bb702e5e3ca0d79e36bea97cb6172995231ff61d0befbabc803bd4a not found: ID does not exist" containerID="cc93fe2a9bb702e5e3ca0d79e36bea97cb6172995231ff61d0befbabc803bd4a" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 
13:56:18.734910 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc93fe2a9bb702e5e3ca0d79e36bea97cb6172995231ff61d0befbabc803bd4a"} err="failed to get container status \"cc93fe2a9bb702e5e3ca0d79e36bea97cb6172995231ff61d0befbabc803bd4a\": rpc error: code = NotFound desc = could not find container \"cc93fe2a9bb702e5e3ca0d79e36bea97cb6172995231ff61d0befbabc803bd4a\": container with ID starting with cc93fe2a9bb702e5e3ca0d79e36bea97cb6172995231ff61d0befbabc803bd4a not found: ID does not exist" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.734966 4812 scope.go:117] "RemoveContainer" containerID="f33c89b59ef51d4494f4fb0e3233676426e29b9509f7126a1839a7b0a9dbe451" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.735187 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ebf72004-b885-40eb-94ca-bce1652d96c1" (UID: "ebf72004-b885-40eb-94ca-bce1652d96c1"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:18 crc kubenswrapper[4812]: E0216 13:56:18.735750 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f33c89b59ef51d4494f4fb0e3233676426e29b9509f7126a1839a7b0a9dbe451\": container with ID starting with f33c89b59ef51d4494f4fb0e3233676426e29b9509f7126a1839a7b0a9dbe451 not found: ID does not exist" containerID="f33c89b59ef51d4494f4fb0e3233676426e29b9509f7126a1839a7b0a9dbe451" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.735835 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f33c89b59ef51d4494f4fb0e3233676426e29b9509f7126a1839a7b0a9dbe451"} err="failed to get container status \"f33c89b59ef51d4494f4fb0e3233676426e29b9509f7126a1839a7b0a9dbe451\": rpc error: code = NotFound desc = could not find container \"f33c89b59ef51d4494f4fb0e3233676426e29b9509f7126a1839a7b0a9dbe451\": container with ID starting with f33c89b59ef51d4494f4fb0e3233676426e29b9509f7126a1839a7b0a9dbe451 not found: ID does not exist" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.763636 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.772308 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ebf72004-b885-40eb-94ca-bce1652d96c1" (UID: "ebf72004-b885-40eb-94ca-bce1652d96c1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:18 crc kubenswrapper[4812]: I0216 13:56:18.866884 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebf72004-b885-40eb-94ca-bce1652d96c1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.030801 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5dd887c4d-zfnsh"] Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.057554 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5dd887c4d-zfnsh"] Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.071184 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.071704 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9999e426-9507-4791-8468-ea110c308f85" containerName="glance-log" containerID="cri-o://f79d27d5276aeca25764d3d7d4ca2d9a7af51e5d9acbb3ae528f540c15be7e69" gracePeriod=30 Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.072581 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9999e426-9507-4791-8468-ea110c308f85" containerName="glance-httpd" containerID="cri-o://966639997bb5b81146597bbb4562f7c4cc69926b4d8b4eb769338b6cc89a729b" gracePeriod=30 Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.576991 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.666762 4812 generic.go:334] "Generic (PLEG): container finished" podID="9999e426-9507-4791-8468-ea110c308f85" containerID="f79d27d5276aeca25764d3d7d4ca2d9a7af51e5d9acbb3ae528f540c15be7e69" exitCode=143 Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.666878 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9999e426-9507-4791-8468-ea110c308f85","Type":"ContainerDied","Data":"f79d27d5276aeca25764d3d7d4ca2d9a7af51e5d9acbb3ae528f540c15be7e69"} Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.670353 4812 generic.go:334] "Generic (PLEG): container finished" podID="a79d4b09-3b4f-4594-bda3-f219239f9471" containerID="2d1456daa0d1baefd2d6d972a9e3f35748a009fd90208a4d1f18d777f484dcd7" exitCode=0 Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.670396 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a79d4b09-3b4f-4594-bda3-f219239f9471","Type":"ContainerDied","Data":"2d1456daa0d1baefd2d6d972a9e3f35748a009fd90208a4d1f18d777f484dcd7"} Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.670424 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a79d4b09-3b4f-4594-bda3-f219239f9471","Type":"ContainerDied","Data":"943f2e033809e6e6acce8e0fe0f61dd7cf86892ee8f624c4bc9fabee6a00c5a8"} Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.670490 4812 scope.go:117] "RemoveContainer" containerID="2d1456daa0d1baefd2d6d972a9e3f35748a009fd90208a4d1f18d777f484dcd7" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.670750 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.695326 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a79d4b09-3b4f-4594-bda3-f219239f9471-httpd-run\") pod \"a79d4b09-3b4f-4594-bda3-f219239f9471\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.695397 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-combined-ca-bundle\") pod \"a79d4b09-3b4f-4594-bda3-f219239f9471\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.695525 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfrfb\" (UniqueName: \"kubernetes.io/projected/a79d4b09-3b4f-4594-bda3-f219239f9471-kube-api-access-dfrfb\") pod \"a79d4b09-3b4f-4594-bda3-f219239f9471\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.695653 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-public-tls-certs\") pod \"a79d4b09-3b4f-4594-bda3-f219239f9471\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.695713 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-config-data\") pod \"a79d4b09-3b4f-4594-bda3-f219239f9471\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.695738 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-scripts\") pod \"a79d4b09-3b4f-4594-bda3-f219239f9471\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.696032 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") pod \"a79d4b09-3b4f-4594-bda3-f219239f9471\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.696116 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a79d4b09-3b4f-4594-bda3-f219239f9471-logs\") pod \"a79d4b09-3b4f-4594-bda3-f219239f9471\" (UID: \"a79d4b09-3b4f-4594-bda3-f219239f9471\") " Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.699097 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a79d4b09-3b4f-4594-bda3-f219239f9471-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a79d4b09-3b4f-4594-bda3-f219239f9471" (UID: "a79d4b09-3b4f-4594-bda3-f219239f9471"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.701917 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a79d4b09-3b4f-4594-bda3-f219239f9471-logs" (OuterVolumeSpecName: "logs") pod "a79d4b09-3b4f-4594-bda3-f219239f9471" (UID: "a79d4b09-3b4f-4594-bda3-f219239f9471"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.705801 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a79d4b09-3b4f-4594-bda3-f219239f9471-kube-api-access-dfrfb" (OuterVolumeSpecName: "kube-api-access-dfrfb") pod "a79d4b09-3b4f-4594-bda3-f219239f9471" (UID: "a79d4b09-3b4f-4594-bda3-f219239f9471"). InnerVolumeSpecName "kube-api-access-dfrfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.706942 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a79d4b09-3b4f-4594-bda3-f219239f9471-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.707027 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a79d4b09-3b4f-4594-bda3-f219239f9471-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.707048 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfrfb\" (UniqueName: \"kubernetes.io/projected/a79d4b09-3b4f-4594-bda3-f219239f9471-kube-api-access-dfrfb\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.718904 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-scripts" (OuterVolumeSpecName: "scripts") pod "a79d4b09-3b4f-4594-bda3-f219239f9471" (UID: "a79d4b09-3b4f-4594-bda3-f219239f9471"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.755858 4812 scope.go:117] "RemoveContainer" containerID="35213eae1d814955f16fa3ba42805f9dee1bddd82b8380c11c48e92e6ea732ef" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.787122 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d" (OuterVolumeSpecName: "glance") pod "a79d4b09-3b4f-4594-bda3-f219239f9471" (UID: "a79d4b09-3b4f-4594-bda3-f219239f9471"). InnerVolumeSpecName "pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.812770 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") on node \"crc\" " Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.812825 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.841859 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-config-data" (OuterVolumeSpecName: "config-data") pod "a79d4b09-3b4f-4594-bda3-f219239f9471" (UID: "a79d4b09-3b4f-4594-bda3-f219239f9471"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.847411 4812 scope.go:117] "RemoveContainer" containerID="2d1456daa0d1baefd2d6d972a9e3f35748a009fd90208a4d1f18d777f484dcd7" Feb 16 13:56:19 crc kubenswrapper[4812]: E0216 13:56:19.848157 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d1456daa0d1baefd2d6d972a9e3f35748a009fd90208a4d1f18d777f484dcd7\": container with ID starting with 2d1456daa0d1baefd2d6d972a9e3f35748a009fd90208a4d1f18d777f484dcd7 not found: ID does not exist" containerID="2d1456daa0d1baefd2d6d972a9e3f35748a009fd90208a4d1f18d777f484dcd7" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.848218 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d1456daa0d1baefd2d6d972a9e3f35748a009fd90208a4d1f18d777f484dcd7"} err="failed to get container status \"2d1456daa0d1baefd2d6d972a9e3f35748a009fd90208a4d1f18d777f484dcd7\": rpc error: code = NotFound desc = could not find container \"2d1456daa0d1baefd2d6d972a9e3f35748a009fd90208a4d1f18d777f484dcd7\": container with ID starting with 2d1456daa0d1baefd2d6d972a9e3f35748a009fd90208a4d1f18d777f484dcd7 not found: ID does not exist" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.848258 4812 scope.go:117] "RemoveContainer" containerID="35213eae1d814955f16fa3ba42805f9dee1bddd82b8380c11c48e92e6ea732ef" Feb 16 13:56:19 crc kubenswrapper[4812]: E0216 13:56:19.848525 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35213eae1d814955f16fa3ba42805f9dee1bddd82b8380c11c48e92e6ea732ef\": container with ID starting with 35213eae1d814955f16fa3ba42805f9dee1bddd82b8380c11c48e92e6ea732ef not found: ID does not exist" containerID="35213eae1d814955f16fa3ba42805f9dee1bddd82b8380c11c48e92e6ea732ef" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.848557 
4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35213eae1d814955f16fa3ba42805f9dee1bddd82b8380c11c48e92e6ea732ef"} err="failed to get container status \"35213eae1d814955f16fa3ba42805f9dee1bddd82b8380c11c48e92e6ea732ef\": rpc error: code = NotFound desc = could not find container \"35213eae1d814955f16fa3ba42805f9dee1bddd82b8380c11c48e92e6ea732ef\": container with ID starting with 35213eae1d814955f16fa3ba42805f9dee1bddd82b8380c11c48e92e6ea732ef not found: ID does not exist" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.851407 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a79d4b09-3b4f-4594-bda3-f219239f9471" (UID: "a79d4b09-3b4f-4594-bda3-f219239f9471"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.855701 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a79d4b09-3b4f-4594-bda3-f219239f9471" (UID: "a79d4b09-3b4f-4594-bda3-f219239f9471"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.872960 4812 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.873191 4812 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d") on node "crc" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.897605 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebf72004-b885-40eb-94ca-bce1652d96c1" path="/var/lib/kubelet/pods/ebf72004-b885-40eb-94ca-bce1652d96c1/volumes" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.916578 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.917020 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.917086 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a79d4b09-3b4f-4594-bda3-f219239f9471-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:19 crc kubenswrapper[4812]: I0216 13:56:19.917156 4812 reconciler_common.go:293] "Volume detached for volume \"pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.012279 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.025395 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 
13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.050618 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 13:56:20 crc kubenswrapper[4812]: E0216 13:56:20.051407 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf72004-b885-40eb-94ca-bce1652d96c1" containerName="placement-api"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.051457 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf72004-b885-40eb-94ca-bce1652d96c1" containerName="placement-api"
Feb 16 13:56:20 crc kubenswrapper[4812]: E0216 13:56:20.051503 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a79d4b09-3b4f-4594-bda3-f219239f9471" containerName="glance-httpd"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.051515 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79d4b09-3b4f-4594-bda3-f219239f9471" containerName="glance-httpd"
Feb 16 13:56:20 crc kubenswrapper[4812]: E0216 13:56:20.051561 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf72004-b885-40eb-94ca-bce1652d96c1" containerName="placement-log"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.051570 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf72004-b885-40eb-94ca-bce1652d96c1" containerName="placement-log"
Feb 16 13:56:20 crc kubenswrapper[4812]: E0216 13:56:20.051586 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a79d4b09-3b4f-4594-bda3-f219239f9471" containerName="glance-log"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.051594 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79d4b09-3b4f-4594-bda3-f219239f9471" containerName="glance-log"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.051891 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a79d4b09-3b4f-4594-bda3-f219239f9471" containerName="glance-log"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.051918 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf72004-b885-40eb-94ca-bce1652d96c1" containerName="placement-api"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.051944 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf72004-b885-40eb-94ca-bce1652d96c1" containerName="placement-log"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.051968 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a79d4b09-3b4f-4594-bda3-f219239f9471" containerName="glance-httpd"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.054001 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.058158 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.063879 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.070750 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.225244 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjs6c\" (UniqueName: \"kubernetes.io/projected/97b7cfdc-998e-4667-be36-ab781bf0fb41-kube-api-access-jjs6c\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.225543 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97b7cfdc-998e-4667-be36-ab781bf0fb41-scripts\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.225654 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b7cfdc-998e-4667-be36-ab781bf0fb41-config-data\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.225678 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b7cfdc-998e-4667-be36-ab781bf0fb41-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.225797 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97b7cfdc-998e-4667-be36-ab781bf0fb41-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.225840 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.225882 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b7cfdc-998e-4667-be36-ab781bf0fb41-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.225901 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97b7cfdc-998e-4667-be36-ab781bf0fb41-logs\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.329379 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjs6c\" (UniqueName: \"kubernetes.io/projected/97b7cfdc-998e-4667-be36-ab781bf0fb41-kube-api-access-jjs6c\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.330040 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97b7cfdc-998e-4667-be36-ab781bf0fb41-scripts\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.330096 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b7cfdc-998e-4667-be36-ab781bf0fb41-config-data\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.330122 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b7cfdc-998e-4667-be36-ab781bf0fb41-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.330235 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97b7cfdc-998e-4667-be36-ab781bf0fb41-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.330315 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.330376 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b7cfdc-998e-4667-be36-ab781bf0fb41-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.330411 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97b7cfdc-998e-4667-be36-ab781bf0fb41-logs\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.331396 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97b7cfdc-998e-4667-be36-ab781bf0fb41-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.333259 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97b7cfdc-998e-4667-be36-ab781bf0fb41-logs\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.334054 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.334095 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4ce26190c4ae61da75993487dc8cd464b862eed00b3412abb1c020ef48a7c392/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.339261 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b7cfdc-998e-4667-be36-ab781bf0fb41-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.339372 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b7cfdc-998e-4667-be36-ab781bf0fb41-config-data\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.351877 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97b7cfdc-998e-4667-be36-ab781bf0fb41-scripts\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.353350 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97b7cfdc-998e-4667-be36-ab781bf0fb41-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.371929 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjs6c\" (UniqueName: \"kubernetes.io/projected/97b7cfdc-998e-4667-be36-ab781bf0fb41-kube-api-access-jjs6c\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.433857 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9031b253-b5e3-4837-8f3a-d04b16b0567d\") pod \"glance-default-external-api-0\" (UID: \"97b7cfdc-998e-4667-be36-ab781bf0fb41\") " pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: I0216 13:56:20.685127 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 13:56:20 crc kubenswrapper[4812]: E0216 13:56:20.883733 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735"
Feb 16 13:56:21 crc kubenswrapper[4812]: I0216 13:56:21.437515 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 13:56:21 crc kubenswrapper[4812]: I0216 13:56:21.715674 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"97b7cfdc-998e-4667-be36-ab781bf0fb41","Type":"ContainerStarted","Data":"3e4aaec542a2f58a7ed25569441c35a2aee7a65a6d8cf551355de8e0d1c5c319"}
Feb 16 13:56:21 crc kubenswrapper[4812]: I0216 13:56:21.900429 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a79d4b09-3b4f-4594-bda3-f219239f9471" path="/var/lib/kubelet/pods/a79d4b09-3b4f-4594-bda3-f219239f9471/volumes"
Feb 16 13:56:22 crc kubenswrapper[4812]: I0216 13:56:22.770146 4812 generic.go:334] "Generic (PLEG): container finished" podID="9999e426-9507-4791-8468-ea110c308f85" containerID="966639997bb5b81146597bbb4562f7c4cc69926b4d8b4eb769338b6cc89a729b" exitCode=0
Feb 16 13:56:22 crc kubenswrapper[4812]: I0216 13:56:22.770288 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9999e426-9507-4791-8468-ea110c308f85","Type":"ContainerDied","Data":"966639997bb5b81146597bbb4562f7c4cc69926b4d8b4eb769338b6cc89a729b"}
Feb 16 13:56:22 crc kubenswrapper[4812]: I0216 13:56:22.777546 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"97b7cfdc-998e-4667-be36-ab781bf0fb41","Type":"ContainerStarted","Data":"1565e62c48b62ad4200f9542bd6b4532f79a827e07dca4140831471c2cd86341"}
Feb 16 13:56:22 crc kubenswrapper[4812]: I0216 13:56:22.808695 4812 generic.go:334] "Generic (PLEG): container finished" podID="735893be-02d4-49a0-af55-787ea0f940cb" containerID="6b8ce315f7c192cde51e62a7726d33382abf2bfe0aa63c81508e58d9af332537" exitCode=0
Feb 16 13:56:22 crc kubenswrapper[4812]: I0216 13:56:22.808793 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86c4db556-7x7cc" event={"ID":"735893be-02d4-49a0-af55-787ea0f940cb","Type":"ContainerDied","Data":"6b8ce315f7c192cde51e62a7726d33382abf2bfe0aa63c81508e58d9af332537"}
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.411774 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.540555 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-internal-tls-certs\") pod \"9999e426-9507-4791-8468-ea110c308f85\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") "
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.541284 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cttcb\" (UniqueName: \"kubernetes.io/projected/9999e426-9507-4791-8468-ea110c308f85-kube-api-access-cttcb\") pod \"9999e426-9507-4791-8468-ea110c308f85\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") "
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.541369 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9999e426-9507-4791-8468-ea110c308f85-logs\") pod \"9999e426-9507-4791-8468-ea110c308f85\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") "
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.541617 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-combined-ca-bundle\") pod \"9999e426-9507-4791-8468-ea110c308f85\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") "
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.541942 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") pod \"9999e426-9507-4791-8468-ea110c308f85\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") "
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.542073 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-scripts\") pod \"9999e426-9507-4791-8468-ea110c308f85\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") "
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.542227 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-config-data\") pod \"9999e426-9507-4791-8468-ea110c308f85\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") "
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.542262 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9999e426-9507-4791-8468-ea110c308f85-httpd-run\") pod \"9999e426-9507-4791-8468-ea110c308f85\" (UID: \"9999e426-9507-4791-8468-ea110c308f85\") "
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.544041 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9999e426-9507-4791-8468-ea110c308f85-logs" (OuterVolumeSpecName: "logs") pod "9999e426-9507-4791-8468-ea110c308f85" (UID: "9999e426-9507-4791-8468-ea110c308f85"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.547304 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9999e426-9507-4791-8468-ea110c308f85-logs\") on node \"crc\" DevicePath \"\""
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.566396 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9999e426-9507-4791-8468-ea110c308f85-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9999e426-9507-4791-8468-ea110c308f85" (UID: "9999e426-9507-4791-8468-ea110c308f85"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.593670 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9999e426-9507-4791-8468-ea110c308f85-kube-api-access-cttcb" (OuterVolumeSpecName: "kube-api-access-cttcb") pod "9999e426-9507-4791-8468-ea110c308f85" (UID: "9999e426-9507-4791-8468-ea110c308f85"). InnerVolumeSpecName "kube-api-access-cttcb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.605269 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-scripts" (OuterVolumeSpecName: "scripts") pod "9999e426-9507-4791-8468-ea110c308f85" (UID: "9999e426-9507-4791-8468-ea110c308f85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.633014 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67" (OuterVolumeSpecName: "glance") pod "9999e426-9507-4791-8468-ea110c308f85" (UID: "9999e426-9507-4791-8468-ea110c308f85"). InnerVolumeSpecName "pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.650775 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9999e426-9507-4791-8468-ea110c308f85-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.650819 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cttcb\" (UniqueName: \"kubernetes.io/projected/9999e426-9507-4791-8468-ea110c308f85-kube-api-access-cttcb\") on node \"crc\" DevicePath \"\""
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.650869 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") on node \"crc\" "
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.650885 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.777938 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-config-data" (OuterVolumeSpecName: "config-data") pod "9999e426-9507-4791-8468-ea110c308f85" (UID: "9999e426-9507-4791-8468-ea110c308f85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.791661 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9999e426-9507-4791-8468-ea110c308f85" (UID: "9999e426-9507-4791-8468-ea110c308f85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.858394 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.858939 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.884866 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.900229 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86c4db556-7x7cc"
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.932591 4812 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.932893 4812 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67") on node "crc"
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.965719 4812 reconciler_common.go:293] "Volume detached for volume \"pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") on node \"crc\" DevicePath \"\""
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.970310 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86c4db556-7x7cc"
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.981363 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9999e426-9507-4791-8468-ea110c308f85","Type":"ContainerDied","Data":"e4475daf6d965adc61138919c8ca058ed1b8b5b8b38823b488ebe00bdea996b1"}
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.981429 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"97b7cfdc-998e-4667-be36-ab781bf0fb41","Type":"ContainerStarted","Data":"8d829c1cbf57ae1ba003a4365985ebe6d38d9d38ab8ce3da2f73c5f0c7e175d0"}
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.981462 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86c4db556-7x7cc" event={"ID":"735893be-02d4-49a0-af55-787ea0f940cb","Type":"ContainerDied","Data":"b4468d66818aa25aa70ac49a22b8682b7d561a74c87f88a13a437d9f3245bd32"}
Feb 16 13:56:23 crc kubenswrapper[4812]: I0216 13:56:23.981504 4812 scope.go:117] "RemoveContainer" containerID="966639997bb5b81146597bbb4562f7c4cc69926b4d8b4eb769338b6cc89a729b"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.005069 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9999e426-9507-4791-8468-ea110c308f85" (UID: "9999e426-9507-4791-8468-ea110c308f85"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.098953 4812 scope.go:117] "RemoveContainer" containerID="f79d27d5276aeca25764d3d7d4ca2d9a7af51e5d9acbb3ae528f540c15be7e69"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.128144 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.128087737 podStartE2EDuration="4.128087737s" podCreationTimestamp="2026-02-16 13:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:56:24.051382268 +0000 UTC m=+1473.115712969" watchObservedRunningTime="2026-02-16 13:56:24.128087737 +0000 UTC m=+1473.192418438"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.138058 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6pb9\" (UniqueName: \"kubernetes.io/projected/735893be-02d4-49a0-af55-787ea0f940cb-kube-api-access-f6pb9\") pod \"735893be-02d4-49a0-af55-787ea0f940cb\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") "
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.138186 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-config\") pod \"735893be-02d4-49a0-af55-787ea0f940cb\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") "
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.138428 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-httpd-config\") pod \"735893be-02d4-49a0-af55-787ea0f940cb\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") "
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.138740 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-combined-ca-bundle\") pod \"735893be-02d4-49a0-af55-787ea0f940cb\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") "
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.138871 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-ovndb-tls-certs\") pod \"735893be-02d4-49a0-af55-787ea0f940cb\" (UID: \"735893be-02d4-49a0-af55-787ea0f940cb\") "
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.155590 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/735893be-02d4-49a0-af55-787ea0f940cb-kube-api-access-f6pb9" (OuterVolumeSpecName: "kube-api-access-f6pb9") pod "735893be-02d4-49a0-af55-787ea0f940cb" (UID: "735893be-02d4-49a0-af55-787ea0f940cb"). InnerVolumeSpecName "kube-api-access-f6pb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.157268 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9999e426-9507-4791-8468-ea110c308f85-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.161985 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "735893be-02d4-49a0-af55-787ea0f940cb" (UID: "735893be-02d4-49a0-af55-787ea0f940cb"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.214973 4812 scope.go:117] "RemoveContainer" containerID="f714ab7e99824f80a0244828f5d93b6625f0548c7fe3b9e53c455da66a0a13c9"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.261288 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6pb9\" (UniqueName: \"kubernetes.io/projected/735893be-02d4-49a0-af55-787ea0f940cb-kube-api-access-f6pb9\") on node \"crc\" DevicePath \"\""
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.261343 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.269727 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-config" (OuterVolumeSpecName: "config") pod "735893be-02d4-49a0-af55-787ea0f940cb" (UID: "735893be-02d4-49a0-af55-787ea0f940cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.272282 4812 scope.go:117] "RemoveContainer" containerID="6b8ce315f7c192cde51e62a7726d33382abf2bfe0aa63c81508e58d9af332537"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.290974 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "735893be-02d4-49a0-af55-787ea0f940cb" (UID: "735893be-02d4-49a0-af55-787ea0f940cb"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.301250 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.325569 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.341109 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "735893be-02d4-49a0-af55-787ea0f940cb" (UID: "735893be-02d4-49a0-af55-787ea0f940cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.346347 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 13:56:24 crc kubenswrapper[4812]: E0216 13:56:24.347200 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735893be-02d4-49a0-af55-787ea0f940cb" containerName="neutron-httpd"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.347233 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="735893be-02d4-49a0-af55-787ea0f940cb" containerName="neutron-httpd"
Feb 16 13:56:24 crc kubenswrapper[4812]: E0216 13:56:24.347241 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9999e426-9507-4791-8468-ea110c308f85" containerName="glance-httpd"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.347249 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9999e426-9507-4791-8468-ea110c308f85" containerName="glance-httpd"
Feb 16 13:56:24 crc kubenswrapper[4812]: E0216 13:56:24.347274 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9999e426-9507-4791-8468-ea110c308f85" containerName="glance-log"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.347281 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9999e426-9507-4791-8468-ea110c308f85" containerName="glance-log"
Feb 16 13:56:24 crc kubenswrapper[4812]: E0216 13:56:24.347311 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735893be-02d4-49a0-af55-787ea0f940cb" containerName="neutron-api"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.347317 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="735893be-02d4-49a0-af55-787ea0f940cb" containerName="neutron-api"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.347590 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="735893be-02d4-49a0-af55-787ea0f940cb" containerName="neutron-httpd"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.347610 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9999e426-9507-4791-8468-ea110c308f85" containerName="glance-httpd"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.347631 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="735893be-02d4-49a0-af55-787ea0f940cb" containerName="neutron-api"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.347642 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9999e426-9507-4791-8468-ea110c308f85" containerName="glance-log"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.349192 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.353435 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.365400 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.367502 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.367574 4812 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.367590 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/735893be-02d4-49a0-af55-787ea0f940cb-config\") on node \"crc\" DevicePath \"\""
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.395160 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.482279 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/906d6897-4bab-46a7-ade3-c5c02bf43c0f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.482663 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.482791 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906d6897-4bab-46a7-ade3-c5c02bf43c0f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.483934 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/906d6897-4bab-46a7-ade3-c5c02bf43c0f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.484122 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/906d6897-4bab-46a7-ade3-c5c02bf43c0f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.484373 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906d6897-4bab-46a7-ade3-c5c02bf43c0f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0"
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.484656 4812 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b56ms\" (UniqueName: \"kubernetes.io/projected/906d6897-4bab-46a7-ade3-c5c02bf43c0f-kube-api-access-b56ms\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.484784 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906d6897-4bab-46a7-ade3-c5c02bf43c0f-logs\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.586878 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b56ms\" (UniqueName: \"kubernetes.io/projected/906d6897-4bab-46a7-ade3-c5c02bf43c0f-kube-api-access-b56ms\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.586954 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906d6897-4bab-46a7-ade3-c5c02bf43c0f-logs\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.587005 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/906d6897-4bab-46a7-ade3-c5c02bf43c0f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.587193 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.587220 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906d6897-4bab-46a7-ade3-c5c02bf43c0f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.587243 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/906d6897-4bab-46a7-ade3-c5c02bf43c0f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.587302 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/906d6897-4bab-46a7-ade3-c5c02bf43c0f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.587332 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906d6897-4bab-46a7-ade3-c5c02bf43c0f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.589311 4812 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906d6897-4bab-46a7-ade3-c5c02bf43c0f-logs\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.589352 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/906d6897-4bab-46a7-ade3-c5c02bf43c0f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.593478 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/906d6897-4bab-46a7-ade3-c5c02bf43c0f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.594869 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906d6897-4bab-46a7-ade3-c5c02bf43c0f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.596967 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.597000 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6691502de4876dbd0d40188b23458c72f9080870e675ce533942e270fddd7230/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.599552 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906d6897-4bab-46a7-ade3-c5c02bf43c0f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.607715 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b56ms\" (UniqueName: \"kubernetes.io/projected/906d6897-4bab-46a7-ade3-c5c02bf43c0f-kube-api-access-b56ms\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.608048 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/906d6897-4bab-46a7-ade3-c5c02bf43c0f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.668180 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bcbbffc-f82a-4bc1-865f-aace29f12e67\") pod \"glance-default-internal-api-0\" (UID: \"906d6897-4bab-46a7-ade3-c5c02bf43c0f\") " pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.686311 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.849953 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86c4db556-7x7cc"] Feb 16 13:56:24 crc kubenswrapper[4812]: I0216 13:56:24.879936 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-86c4db556-7x7cc"] Feb 16 13:56:25 crc kubenswrapper[4812]: I0216 13:56:25.449278 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 13:56:25 crc kubenswrapper[4812]: I0216 13:56:25.903151 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="735893be-02d4-49a0-af55-787ea0f940cb" path="/var/lib/kubelet/pods/735893be-02d4-49a0-af55-787ea0f940cb/volumes" Feb 16 13:56:25 crc kubenswrapper[4812]: I0216 13:56:25.907097 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9999e426-9507-4791-8468-ea110c308f85" path="/var/lib/kubelet/pods/9999e426-9507-4791-8468-ea110c308f85/volumes" Feb 16 13:56:26 crc kubenswrapper[4812]: I0216 13:56:26.015617 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"906d6897-4bab-46a7-ade3-c5c02bf43c0f","Type":"ContainerStarted","Data":"41fbf21834a959dc85b76d3b0e94ca31eede1b1eef2ab06b8f46d78694896bde"} Feb 16 13:56:27 crc kubenswrapper[4812]: I0216 13:56:27.030574 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"906d6897-4bab-46a7-ade3-c5c02bf43c0f","Type":"ContainerStarted","Data":"bfbdf7a7ba11f7e6d6f8f5171f510e18b5315eefefdc802165642d10a099f9bd"} Feb 16 13:56:27 crc kubenswrapper[4812]: I0216 13:56:27.031569 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"906d6897-4bab-46a7-ade3-c5c02bf43c0f","Type":"ContainerStarted","Data":"6fb4559687222d7add92b5b0e7e3412916b91cf198142cdb7cbf13f97de180da"} Feb 16 13:56:27 crc kubenswrapper[4812]: I0216 13:56:27.035025 4812 generic.go:334] "Generic (PLEG): container finished" podID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerID="5b04b8705207f032c0b920d34c5c7af28f4a8e3af57a3ea22c2a52fbf1b98aff" exitCode=0 Feb 16 13:56:27 crc kubenswrapper[4812]: I0216 13:56:27.035070 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97982589-9f36-48d3-929e-d6f0d2b83a3b","Type":"ContainerDied","Data":"5b04b8705207f032c0b920d34c5c7af28f4a8e3af57a3ea22c2a52fbf1b98aff"} Feb 16 13:56:27 crc kubenswrapper[4812]: I0216 13:56:27.053836 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.053804368 podStartE2EDuration="3.053804368s" podCreationTimestamp="2026-02-16 13:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:56:27.051853612 +0000 UTC m=+1476.116184333" watchObservedRunningTime="2026-02-16 13:56:27.053804368 +0000 UTC m=+1476.118135069" Feb 16 13:56:30 crc kubenswrapper[4812]: I0216 13:56:30.686347 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 13:56:30 crc kubenswrapper[4812]: I0216 13:56:30.686916 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 13:56:30 crc 
kubenswrapper[4812]: I0216 13:56:30.721317 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 13:56:30 crc kubenswrapper[4812]: I0216 13:56:30.730243 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 13:56:31 crc kubenswrapper[4812]: I0216 13:56:31.858942 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 13:56:31 crc kubenswrapper[4812]: I0216 13:56:31.858989 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 13:56:33 crc kubenswrapper[4812]: E0216 13:56:33.881791 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:56:34 crc kubenswrapper[4812]: I0216 13:56:34.475575 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 13:56:34 crc kubenswrapper[4812]: I0216 13:56:34.475974 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 13:56:34 crc kubenswrapper[4812]: I0216 13:56:34.497990 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 13:56:34 crc kubenswrapper[4812]: I0216 13:56:34.844810 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 13:56:34 crc kubenswrapper[4812]: I0216 13:56:34.846866 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 13:56:34 
crc kubenswrapper[4812]: I0216 13:56:34.857015 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 13:56:34 crc kubenswrapper[4812]: I0216 13:56:34.956695 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-q6dqv"] Feb 16 13:56:34 crc kubenswrapper[4812]: I0216 13:56:34.958198 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q6dqv" Feb 16 13:56:34 crc kubenswrapper[4812]: I0216 13:56:34.989844 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-q6dqv"] Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.000886 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.006913 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.063857 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1622167e-23ac-4689-8708-02bfe0050250-operator-scripts\") pod \"nova-api-db-create-q6dqv\" (UID: \"1622167e-23ac-4689-8708-02bfe0050250\") " pod="openstack/nova-api-db-create-q6dqv" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.064120 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk9xt\" (UniqueName: \"kubernetes.io/projected/1622167e-23ac-4689-8708-02bfe0050250-kube-api-access-xk9xt\") pod \"nova-api-db-create-q6dqv\" (UID: \"1622167e-23ac-4689-8708-02bfe0050250\") " pod="openstack/nova-api-db-create-q6dqv" Feb 16 13:56:35 crc kubenswrapper[4812]: 
I0216 13:56:35.153968 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6dhdk"] Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.155545 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6dhdk" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.165912 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk9xt\" (UniqueName: \"kubernetes.io/projected/1622167e-23ac-4689-8708-02bfe0050250-kube-api-access-xk9xt\") pod \"nova-api-db-create-q6dqv\" (UID: \"1622167e-23ac-4689-8708-02bfe0050250\") " pod="openstack/nova-api-db-create-q6dqv" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.166088 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1622167e-23ac-4689-8708-02bfe0050250-operator-scripts\") pod \"nova-api-db-create-q6dqv\" (UID: \"1622167e-23ac-4689-8708-02bfe0050250\") " pod="openstack/nova-api-db-create-q6dqv" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.166943 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1622167e-23ac-4689-8708-02bfe0050250-operator-scripts\") pod \"nova-api-db-create-q6dqv\" (UID: \"1622167e-23ac-4689-8708-02bfe0050250\") " pod="openstack/nova-api-db-create-q6dqv" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.169408 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6dhdk"] Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.196500 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk9xt\" (UniqueName: \"kubernetes.io/projected/1622167e-23ac-4689-8708-02bfe0050250-kube-api-access-xk9xt\") pod \"nova-api-db-create-q6dqv\" (UID: \"1622167e-23ac-4689-8708-02bfe0050250\") " 
pod="openstack/nova-api-db-create-q6dqv" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.248364 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-wwcfc"] Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.279763 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5pbr\" (UniqueName: \"kubernetes.io/projected/99792a16-b3c8-4956-9f97-0c64ad3f97d3-kube-api-access-h5pbr\") pod \"nova-cell0-db-create-6dhdk\" (UID: \"99792a16-b3c8-4956-9f97-0c64ad3f97d3\") " pod="openstack/nova-cell0-db-create-6dhdk" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.279879 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99792a16-b3c8-4956-9f97-0c64ad3f97d3-operator-scripts\") pod \"nova-cell0-db-create-6dhdk\" (UID: \"99792a16-b3c8-4956-9f97-0c64ad3f97d3\") " pod="openstack/nova-cell0-db-create-6dhdk" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.285244 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-wwcfc" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.294198 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q6dqv" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.321402 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-wwcfc"] Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.335729 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8835-account-create-update-tnlkp"] Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.337419 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8835-account-create-update-tnlkp" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.341050 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.360633 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8835-account-create-update-tnlkp"] Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.385045 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5pbr\" (UniqueName: \"kubernetes.io/projected/99792a16-b3c8-4956-9f97-0c64ad3f97d3-kube-api-access-h5pbr\") pod \"nova-cell0-db-create-6dhdk\" (UID: \"99792a16-b3c8-4956-9f97-0c64ad3f97d3\") " pod="openstack/nova-cell0-db-create-6dhdk" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.385373 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99792a16-b3c8-4956-9f97-0c64ad3f97d3-operator-scripts\") pod \"nova-cell0-db-create-6dhdk\" (UID: \"99792a16-b3c8-4956-9f97-0c64ad3f97d3\") " pod="openstack/nova-cell0-db-create-6dhdk" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.385569 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c3e6add-a453-46a2-b3ef-4c92d6c2426a-operator-scripts\") pod \"nova-cell1-db-create-wwcfc\" (UID: \"3c3e6add-a453-46a2-b3ef-4c92d6c2426a\") " pod="openstack/nova-cell1-db-create-wwcfc" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.385599 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79sk6\" (UniqueName: \"kubernetes.io/projected/3c3e6add-a453-46a2-b3ef-4c92d6c2426a-kube-api-access-79sk6\") pod \"nova-cell1-db-create-wwcfc\" (UID: \"3c3e6add-a453-46a2-b3ef-4c92d6c2426a\") " 
pod="openstack/nova-cell1-db-create-wwcfc" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.386357 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99792a16-b3c8-4956-9f97-0c64ad3f97d3-operator-scripts\") pod \"nova-cell0-db-create-6dhdk\" (UID: \"99792a16-b3c8-4956-9f97-0c64ad3f97d3\") " pod="openstack/nova-cell0-db-create-6dhdk" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.422858 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5pbr\" (UniqueName: \"kubernetes.io/projected/99792a16-b3c8-4956-9f97-0c64ad3f97d3-kube-api-access-h5pbr\") pod \"nova-cell0-db-create-6dhdk\" (UID: \"99792a16-b3c8-4956-9f97-0c64ad3f97d3\") " pod="openstack/nova-cell0-db-create-6dhdk" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.454057 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2f15-account-create-update-kzw5k"] Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.455767 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2f15-account-create-update-kzw5k" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.461723 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.472674 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2f15-account-create-update-kzw5k"] Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.476499 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6dhdk" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.490663 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79sk6\" (UniqueName: \"kubernetes.io/projected/3c3e6add-a453-46a2-b3ef-4c92d6c2426a-kube-api-access-79sk6\") pod \"nova-cell1-db-create-wwcfc\" (UID: \"3c3e6add-a453-46a2-b3ef-4c92d6c2426a\") " pod="openstack/nova-cell1-db-create-wwcfc" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.491185 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4srq7\" (UniqueName: \"kubernetes.io/projected/78d112fe-cdc5-4d0e-8636-49878e3888d9-kube-api-access-4srq7\") pod \"nova-api-8835-account-create-update-tnlkp\" (UID: \"78d112fe-cdc5-4d0e-8636-49878e3888d9\") " pod="openstack/nova-api-8835-account-create-update-tnlkp" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.491244 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78d112fe-cdc5-4d0e-8636-49878e3888d9-operator-scripts\") pod \"nova-api-8835-account-create-update-tnlkp\" (UID: \"78d112fe-cdc5-4d0e-8636-49878e3888d9\") " pod="openstack/nova-api-8835-account-create-update-tnlkp" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.491511 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c3e6add-a453-46a2-b3ef-4c92d6c2426a-operator-scripts\") pod \"nova-cell1-db-create-wwcfc\" (UID: \"3c3e6add-a453-46a2-b3ef-4c92d6c2426a\") " pod="openstack/nova-cell1-db-create-wwcfc" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.492420 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c3e6add-a453-46a2-b3ef-4c92d6c2426a-operator-scripts\") pod 
\"nova-cell1-db-create-wwcfc\" (UID: \"3c3e6add-a453-46a2-b3ef-4c92d6c2426a\") " pod="openstack/nova-cell1-db-create-wwcfc" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.543433 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79sk6\" (UniqueName: \"kubernetes.io/projected/3c3e6add-a453-46a2-b3ef-4c92d6c2426a-kube-api-access-79sk6\") pod \"nova-cell1-db-create-wwcfc\" (UID: \"3c3e6add-a453-46a2-b3ef-4c92d6c2426a\") " pod="openstack/nova-cell1-db-create-wwcfc" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.583855 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-1437-account-create-update-xp5qz"] Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.585833 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1437-account-create-update-xp5qz" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.593240 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.594093 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn4js\" (UniqueName: \"kubernetes.io/projected/124644b5-886b-4bd1-af08-1ddc88e0ac9d-kube-api-access-tn4js\") pod \"nova-cell0-2f15-account-create-update-kzw5k\" (UID: \"124644b5-886b-4bd1-af08-1ddc88e0ac9d\") " pod="openstack/nova-cell0-2f15-account-create-update-kzw5k" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.594261 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/124644b5-886b-4bd1-af08-1ddc88e0ac9d-operator-scripts\") pod \"nova-cell0-2f15-account-create-update-kzw5k\" (UID: \"124644b5-886b-4bd1-af08-1ddc88e0ac9d\") " pod="openstack/nova-cell0-2f15-account-create-update-kzw5k" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 
13:56:35.599047 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1437-account-create-update-xp5qz"] Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.602708 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4srq7\" (UniqueName: \"kubernetes.io/projected/78d112fe-cdc5-4d0e-8636-49878e3888d9-kube-api-access-4srq7\") pod \"nova-api-8835-account-create-update-tnlkp\" (UID: \"78d112fe-cdc5-4d0e-8636-49878e3888d9\") " pod="openstack/nova-api-8835-account-create-update-tnlkp" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.602828 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78d112fe-cdc5-4d0e-8636-49878e3888d9-operator-scripts\") pod \"nova-api-8835-account-create-update-tnlkp\" (UID: \"78d112fe-cdc5-4d0e-8636-49878e3888d9\") " pod="openstack/nova-api-8835-account-create-update-tnlkp" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.603899 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78d112fe-cdc5-4d0e-8636-49878e3888d9-operator-scripts\") pod \"nova-api-8835-account-create-update-tnlkp\" (UID: \"78d112fe-cdc5-4d0e-8636-49878e3888d9\") " pod="openstack/nova-api-8835-account-create-update-tnlkp" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.643125 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4srq7\" (UniqueName: \"kubernetes.io/projected/78d112fe-cdc5-4d0e-8636-49878e3888d9-kube-api-access-4srq7\") pod \"nova-api-8835-account-create-update-tnlkp\" (UID: \"78d112fe-cdc5-4d0e-8636-49878e3888d9\") " pod="openstack/nova-api-8835-account-create-update-tnlkp" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.708600 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/2196ced1-8ac4-4012-8791-b9487350bd38-operator-scripts\") pod \"nova-cell1-1437-account-create-update-xp5qz\" (UID: \"2196ced1-8ac4-4012-8791-b9487350bd38\") " pod="openstack/nova-cell1-1437-account-create-update-xp5qz" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.708696 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn4js\" (UniqueName: \"kubernetes.io/projected/124644b5-886b-4bd1-af08-1ddc88e0ac9d-kube-api-access-tn4js\") pod \"nova-cell0-2f15-account-create-update-kzw5k\" (UID: \"124644b5-886b-4bd1-af08-1ddc88e0ac9d\") " pod="openstack/nova-cell0-2f15-account-create-update-kzw5k" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.708747 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/124644b5-886b-4bd1-af08-1ddc88e0ac9d-operator-scripts\") pod \"nova-cell0-2f15-account-create-update-kzw5k\" (UID: \"124644b5-886b-4bd1-af08-1ddc88e0ac9d\") " pod="openstack/nova-cell0-2f15-account-create-update-kzw5k" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.708837 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk95j\" (UniqueName: \"kubernetes.io/projected/2196ced1-8ac4-4012-8791-b9487350bd38-kube-api-access-sk95j\") pod \"nova-cell1-1437-account-create-update-xp5qz\" (UID: \"2196ced1-8ac4-4012-8791-b9487350bd38\") " pod="openstack/nova-cell1-1437-account-create-update-xp5qz" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.709787 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/124644b5-886b-4bd1-af08-1ddc88e0ac9d-operator-scripts\") pod \"nova-cell0-2f15-account-create-update-kzw5k\" (UID: \"124644b5-886b-4bd1-af08-1ddc88e0ac9d\") " pod="openstack/nova-cell0-2f15-account-create-update-kzw5k" Feb 16 13:56:35 
crc kubenswrapper[4812]: I0216 13:56:35.732994 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn4js\" (UniqueName: \"kubernetes.io/projected/124644b5-886b-4bd1-af08-1ddc88e0ac9d-kube-api-access-tn4js\") pod \"nova-cell0-2f15-account-create-update-kzw5k\" (UID: \"124644b5-886b-4bd1-af08-1ddc88e0ac9d\") " pod="openstack/nova-cell0-2f15-account-create-update-kzw5k" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.810641 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk95j\" (UniqueName: \"kubernetes.io/projected/2196ced1-8ac4-4012-8791-b9487350bd38-kube-api-access-sk95j\") pod \"nova-cell1-1437-account-create-update-xp5qz\" (UID: \"2196ced1-8ac4-4012-8791-b9487350bd38\") " pod="openstack/nova-cell1-1437-account-create-update-xp5qz" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.810884 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2196ced1-8ac4-4012-8791-b9487350bd38-operator-scripts\") pod \"nova-cell1-1437-account-create-update-xp5qz\" (UID: \"2196ced1-8ac4-4012-8791-b9487350bd38\") " pod="openstack/nova-cell1-1437-account-create-update-xp5qz" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.811988 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2196ced1-8ac4-4012-8791-b9487350bd38-operator-scripts\") pod \"nova-cell1-1437-account-create-update-xp5qz\" (UID: \"2196ced1-8ac4-4012-8791-b9487350bd38\") " pod="openstack/nova-cell1-1437-account-create-update-xp5qz" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.821911 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-wwcfc" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.838732 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk95j\" (UniqueName: \"kubernetes.io/projected/2196ced1-8ac4-4012-8791-b9487350bd38-kube-api-access-sk95j\") pod \"nova-cell1-1437-account-create-update-xp5qz\" (UID: \"2196ced1-8ac4-4012-8791-b9487350bd38\") " pod="openstack/nova-cell1-1437-account-create-update-xp5qz" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.914558 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8835-account-create-update-tnlkp" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.937408 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2f15-account-create-update-kzw5k" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.942237 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-q6dqv"] Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.943459 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.944421 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 13:56:35 crc kubenswrapper[4812]: I0216 13:56:35.955201 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1437-account-create-update-xp5qz" Feb 16 13:56:36 crc kubenswrapper[4812]: I0216 13:56:36.131589 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6dhdk"] Feb 16 13:56:36 crc kubenswrapper[4812]: I0216 13:56:36.425596 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-wwcfc"] Feb 16 13:56:36 crc kubenswrapper[4812]: I0216 13:56:36.962930 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8835-account-create-update-tnlkp"] Feb 16 13:56:36 crc kubenswrapper[4812]: W0216 13:56:36.963505 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78d112fe_cdc5_4d0e_8636_49878e3888d9.slice/crio-8e57d032e06151c67933a568315a4dbbf43142413ce284efe6b00f093d1c9433 WatchSource:0}: Error finding container 8e57d032e06151c67933a568315a4dbbf43142413ce284efe6b00f093d1c9433: Status 404 returned error can't find the container with id 8e57d032e06151c67933a568315a4dbbf43142413ce284efe6b00f093d1c9433 Feb 16 13:56:36 crc kubenswrapper[4812]: I0216 13:56:36.968532 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-wwcfc" event={"ID":"3c3e6add-a453-46a2-b3ef-4c92d6c2426a","Type":"ContainerStarted","Data":"e1ede15a98b96250acf05fdeea17efa9b1b5467727999d3424899e30f915ff10"} Feb 16 13:56:36 crc kubenswrapper[4812]: I0216 13:56:36.968595 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-wwcfc" event={"ID":"3c3e6add-a453-46a2-b3ef-4c92d6c2426a","Type":"ContainerStarted","Data":"a8e4c10fc7e3a1a12a5cbc5e452776969f2592e92063e0bb654967640dd55349"} Feb 16 13:56:36 crc kubenswrapper[4812]: I0216 13:56:36.971166 4812 generic.go:334] "Generic (PLEG): container finished" podID="1622167e-23ac-4689-8708-02bfe0050250" containerID="1e27c8856c5b857d056062f09160cbc3743ded64f797ed789869bcb56a775c50" 
exitCode=0 Feb 16 13:56:36 crc kubenswrapper[4812]: I0216 13:56:36.971237 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q6dqv" event={"ID":"1622167e-23ac-4689-8708-02bfe0050250","Type":"ContainerDied","Data":"1e27c8856c5b857d056062f09160cbc3743ded64f797ed789869bcb56a775c50"} Feb 16 13:56:36 crc kubenswrapper[4812]: I0216 13:56:36.971281 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q6dqv" event={"ID":"1622167e-23ac-4689-8708-02bfe0050250","Type":"ContainerStarted","Data":"3b3867d1df328ab03b860e324e8e9159f4a1dc79de17ff62c64ccba0dc365122"} Feb 16 13:56:36 crc kubenswrapper[4812]: I0216 13:56:36.983657 4812 generic.go:334] "Generic (PLEG): container finished" podID="99792a16-b3c8-4956-9f97-0c64ad3f97d3" containerID="736ce3c115f45b44c8dd75eded7a1a2338b68279f8f3dd04af39b3dd25327e65" exitCode=0 Feb 16 13:56:36 crc kubenswrapper[4812]: I0216 13:56:36.984989 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2f15-account-create-update-kzw5k"] Feb 16 13:56:36 crc kubenswrapper[4812]: I0216 13:56:36.985027 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6dhdk" event={"ID":"99792a16-b3c8-4956-9f97-0c64ad3f97d3","Type":"ContainerDied","Data":"736ce3c115f45b44c8dd75eded7a1a2338b68279f8f3dd04af39b3dd25327e65"} Feb 16 13:56:36 crc kubenswrapper[4812]: I0216 13:56:36.985047 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6dhdk" event={"ID":"99792a16-b3c8-4956-9f97-0c64ad3f97d3","Type":"ContainerStarted","Data":"5750916bdc66866190185521b671695b418100d9ff4bd1a258d2df6cd668c479"} Feb 16 13:56:36 crc kubenswrapper[4812]: W0216 13:56:36.990030 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod124644b5_886b_4bd1_af08_1ddc88e0ac9d.slice/crio-4003e41bfb61380c475902603698846e491cab981701f88c862db93e633c9077 
WatchSource:0}: Error finding container 4003e41bfb61380c475902603698846e491cab981701f88c862db93e633c9077: Status 404 returned error can't find the container with id 4003e41bfb61380c475902603698846e491cab981701f88c862db93e633c9077 Feb 16 13:56:37 crc kubenswrapper[4812]: I0216 13:56:37.007342 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1437-account-create-update-xp5qz"] Feb 16 13:56:37 crc kubenswrapper[4812]: I0216 13:56:37.013580 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-wwcfc" podStartSLOduration=2.013547376 podStartE2EDuration="2.013547376s" podCreationTimestamp="2026-02-16 13:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:56:36.983852652 +0000 UTC m=+1486.048183363" watchObservedRunningTime="2026-02-16 13:56:37.013547376 +0000 UTC m=+1486.077878077" Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.000397 4812 generic.go:334] "Generic (PLEG): container finished" podID="78d112fe-cdc5-4d0e-8636-49878e3888d9" containerID="74f83999946ea05dd0befcb351e0d879a5b07dc9a50d040340e2f6d02c535073" exitCode=0 Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.000997 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8835-account-create-update-tnlkp" event={"ID":"78d112fe-cdc5-4d0e-8636-49878e3888d9","Type":"ContainerDied","Data":"74f83999946ea05dd0befcb351e0d879a5b07dc9a50d040340e2f6d02c535073"} Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.001080 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8835-account-create-update-tnlkp" event={"ID":"78d112fe-cdc5-4d0e-8636-49878e3888d9","Type":"ContainerStarted","Data":"8e57d032e06151c67933a568315a4dbbf43142413ce284efe6b00f093d1c9433"} Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.005915 4812 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-cell1-1437-account-create-update-xp5qz" event={"ID":"2196ced1-8ac4-4012-8791-b9487350bd38","Type":"ContainerStarted","Data":"28b7d53e73afee19fcb85699025053acf8ecca0824ae02179b102458d4dcf726"} Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.005979 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1437-account-create-update-xp5qz" event={"ID":"2196ced1-8ac4-4012-8791-b9487350bd38","Type":"ContainerStarted","Data":"fc78cc1725bdfd8439f38c7e2993a0c987ba017d47c3ea3823bae2d396477a27"} Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.011986 4812 generic.go:334] "Generic (PLEG): container finished" podID="3c3e6add-a453-46a2-b3ef-4c92d6c2426a" containerID="e1ede15a98b96250acf05fdeea17efa9b1b5467727999d3424899e30f915ff10" exitCode=0 Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.012172 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-wwcfc" event={"ID":"3c3e6add-a453-46a2-b3ef-4c92d6c2426a","Type":"ContainerDied","Data":"e1ede15a98b96250acf05fdeea17efa9b1b5467727999d3424899e30f915ff10"} Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.015757 4812 generic.go:334] "Generic (PLEG): container finished" podID="124644b5-886b-4bd1-af08-1ddc88e0ac9d" containerID="865a3cd7799f0720376a5a17a3b737384ca24fe5013f9c4df2333093bccc22b5" exitCode=0 Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.016049 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.016092 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.016135 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2f15-account-create-update-kzw5k" event={"ID":"124644b5-886b-4bd1-af08-1ddc88e0ac9d","Type":"ContainerDied","Data":"865a3cd7799f0720376a5a17a3b737384ca24fe5013f9c4df2333093bccc22b5"} Feb 16 13:56:38 crc 
kubenswrapper[4812]: I0216 13:56:38.016183 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2f15-account-create-update-kzw5k" event={"ID":"124644b5-886b-4bd1-af08-1ddc88e0ac9d","Type":"ContainerStarted","Data":"4003e41bfb61380c475902603698846e491cab981701f88c862db93e633c9077"} Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.061598 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-1437-account-create-update-xp5qz" podStartSLOduration=3.061572059 podStartE2EDuration="3.061572059s" podCreationTimestamp="2026-02-16 13:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:56:38.057576612 +0000 UTC m=+1487.121907313" watchObservedRunningTime="2026-02-16 13:56:38.061572059 +0000 UTC m=+1487.125902760" Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.660944 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q6dqv" Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.667856 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6dhdk" Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.791799 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk9xt\" (UniqueName: \"kubernetes.io/projected/1622167e-23ac-4689-8708-02bfe0050250-kube-api-access-xk9xt\") pod \"1622167e-23ac-4689-8708-02bfe0050250\" (UID: \"1622167e-23ac-4689-8708-02bfe0050250\") " Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.791898 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5pbr\" (UniqueName: \"kubernetes.io/projected/99792a16-b3c8-4956-9f97-0c64ad3f97d3-kube-api-access-h5pbr\") pod \"99792a16-b3c8-4956-9f97-0c64ad3f97d3\" (UID: \"99792a16-b3c8-4956-9f97-0c64ad3f97d3\") " Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.791985 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99792a16-b3c8-4956-9f97-0c64ad3f97d3-operator-scripts\") pod \"99792a16-b3c8-4956-9f97-0c64ad3f97d3\" (UID: \"99792a16-b3c8-4956-9f97-0c64ad3f97d3\") " Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.792041 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1622167e-23ac-4689-8708-02bfe0050250-operator-scripts\") pod \"1622167e-23ac-4689-8708-02bfe0050250\" (UID: \"1622167e-23ac-4689-8708-02bfe0050250\") " Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.792692 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99792a16-b3c8-4956-9f97-0c64ad3f97d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "99792a16-b3c8-4956-9f97-0c64ad3f97d3" (UID: "99792a16-b3c8-4956-9f97-0c64ad3f97d3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.792839 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1622167e-23ac-4689-8708-02bfe0050250-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1622167e-23ac-4689-8708-02bfe0050250" (UID: "1622167e-23ac-4689-8708-02bfe0050250"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.798060 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99792a16-b3c8-4956-9f97-0c64ad3f97d3-kube-api-access-h5pbr" (OuterVolumeSpecName: "kube-api-access-h5pbr") pod "99792a16-b3c8-4956-9f97-0c64ad3f97d3" (UID: "99792a16-b3c8-4956-9f97-0c64ad3f97d3"). InnerVolumeSpecName "kube-api-access-h5pbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.799758 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1622167e-23ac-4689-8708-02bfe0050250-kube-api-access-xk9xt" (OuterVolumeSpecName: "kube-api-access-xk9xt") pod "1622167e-23ac-4689-8708-02bfe0050250" (UID: "1622167e-23ac-4689-8708-02bfe0050250"). InnerVolumeSpecName "kube-api-access-xk9xt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.894673 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5pbr\" (UniqueName: \"kubernetes.io/projected/99792a16-b3c8-4956-9f97-0c64ad3f97d3-kube-api-access-h5pbr\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.894707 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99792a16-b3c8-4956-9f97-0c64ad3f97d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.894717 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1622167e-23ac-4689-8708-02bfe0050250-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:38 crc kubenswrapper[4812]: I0216 13:56:38.894727 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk9xt\" (UniqueName: \"kubernetes.io/projected/1622167e-23ac-4689-8708-02bfe0050250-kube-api-access-xk9xt\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.028745 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6dhdk" Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.028772 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6dhdk" event={"ID":"99792a16-b3c8-4956-9f97-0c64ad3f97d3","Type":"ContainerDied","Data":"5750916bdc66866190185521b671695b418100d9ff4bd1a258d2df6cd668c479"} Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.029171 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5750916bdc66866190185521b671695b418100d9ff4bd1a258d2df6cd668c479" Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.031288 4812 generic.go:334] "Generic (PLEG): container finished" podID="2196ced1-8ac4-4012-8791-b9487350bd38" containerID="28b7d53e73afee19fcb85699025053acf8ecca0824ae02179b102458d4dcf726" exitCode=0 Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.031336 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1437-account-create-update-xp5qz" event={"ID":"2196ced1-8ac4-4012-8791-b9487350bd38","Type":"ContainerDied","Data":"28b7d53e73afee19fcb85699025053acf8ecca0824ae02179b102458d4dcf726"} Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.033586 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q6dqv" event={"ID":"1622167e-23ac-4689-8708-02bfe0050250","Type":"ContainerDied","Data":"3b3867d1df328ab03b860e324e8e9159f4a1dc79de17ff62c64ccba0dc365122"} Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.033653 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b3867d1df328ab03b860e324e8e9159f4a1dc79de17ff62c64ccba0dc365122" Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.033679 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-q6dqv" Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.082811 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.082933 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.144266 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.900910 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2f15-account-create-update-kzw5k" Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.914690 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8835-account-create-update-tnlkp" Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.929746 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-wwcfc" Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.985901 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c3e6add-a453-46a2-b3ef-4c92d6c2426a-operator-scripts\") pod \"3c3e6add-a453-46a2-b3ef-4c92d6c2426a\" (UID: \"3c3e6add-a453-46a2-b3ef-4c92d6c2426a\") " Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.986288 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4srq7\" (UniqueName: \"kubernetes.io/projected/78d112fe-cdc5-4d0e-8636-49878e3888d9-kube-api-access-4srq7\") pod \"78d112fe-cdc5-4d0e-8636-49878e3888d9\" (UID: \"78d112fe-cdc5-4d0e-8636-49878e3888d9\") " Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.986325 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79sk6\" (UniqueName: \"kubernetes.io/projected/3c3e6add-a453-46a2-b3ef-4c92d6c2426a-kube-api-access-79sk6\") pod \"3c3e6add-a453-46a2-b3ef-4c92d6c2426a\" (UID: \"3c3e6add-a453-46a2-b3ef-4c92d6c2426a\") " Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.986350 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c3e6add-a453-46a2-b3ef-4c92d6c2426a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c3e6add-a453-46a2-b3ef-4c92d6c2426a" (UID: "3c3e6add-a453-46a2-b3ef-4c92d6c2426a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.986408 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78d112fe-cdc5-4d0e-8636-49878e3888d9-operator-scripts\") pod \"78d112fe-cdc5-4d0e-8636-49878e3888d9\" (UID: \"78d112fe-cdc5-4d0e-8636-49878e3888d9\") " Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.987369 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78d112fe-cdc5-4d0e-8636-49878e3888d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78d112fe-cdc5-4d0e-8636-49878e3888d9" (UID: "78d112fe-cdc5-4d0e-8636-49878e3888d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:56:39 crc kubenswrapper[4812]: I0216 13:56:39.987541 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c3e6add-a453-46a2-b3ef-4c92d6c2426a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.005994 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78d112fe-cdc5-4d0e-8636-49878e3888d9-kube-api-access-4srq7" (OuterVolumeSpecName: "kube-api-access-4srq7") pod "78d112fe-cdc5-4d0e-8636-49878e3888d9" (UID: "78d112fe-cdc5-4d0e-8636-49878e3888d9"). InnerVolumeSpecName "kube-api-access-4srq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.008287 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c3e6add-a453-46a2-b3ef-4c92d6c2426a-kube-api-access-79sk6" (OuterVolumeSpecName: "kube-api-access-79sk6") pod "3c3e6add-a453-46a2-b3ef-4c92d6c2426a" (UID: "3c3e6add-a453-46a2-b3ef-4c92d6c2426a"). 
InnerVolumeSpecName "kube-api-access-79sk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.046020 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2f15-account-create-update-kzw5k" event={"ID":"124644b5-886b-4bd1-af08-1ddc88e0ac9d","Type":"ContainerDied","Data":"4003e41bfb61380c475902603698846e491cab981701f88c862db93e633c9077"} Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.046075 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4003e41bfb61380c475902603698846e491cab981701f88c862db93e633c9077" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.046146 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2f15-account-create-update-kzw5k" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.048067 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8835-account-create-update-tnlkp" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.048044 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8835-account-create-update-tnlkp" event={"ID":"78d112fe-cdc5-4d0e-8636-49878e3888d9","Type":"ContainerDied","Data":"8e57d032e06151c67933a568315a4dbbf43142413ce284efe6b00f093d1c9433"} Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.048206 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e57d032e06151c67933a568315a4dbbf43142413ce284efe6b00f093d1c9433" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.051027 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-wwcfc" event={"ID":"3c3e6add-a453-46a2-b3ef-4c92d6c2426a","Type":"ContainerDied","Data":"a8e4c10fc7e3a1a12a5cbc5e452776969f2592e92063e0bb654967640dd55349"} Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.051083 4812 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8e4c10fc7e3a1a12a5cbc5e452776969f2592e92063e0bb654967640dd55349" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.051084 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-wwcfc" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.088547 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/124644b5-886b-4bd1-af08-1ddc88e0ac9d-operator-scripts\") pod \"124644b5-886b-4bd1-af08-1ddc88e0ac9d\" (UID: \"124644b5-886b-4bd1-af08-1ddc88e0ac9d\") " Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.088768 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn4js\" (UniqueName: \"kubernetes.io/projected/124644b5-886b-4bd1-af08-1ddc88e0ac9d-kube-api-access-tn4js\") pod \"124644b5-886b-4bd1-af08-1ddc88e0ac9d\" (UID: \"124644b5-886b-4bd1-af08-1ddc88e0ac9d\") " Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.090270 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4srq7\" (UniqueName: \"kubernetes.io/projected/78d112fe-cdc5-4d0e-8636-49878e3888d9-kube-api-access-4srq7\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.091555 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79sk6\" (UniqueName: \"kubernetes.io/projected/3c3e6add-a453-46a2-b3ef-4c92d6c2426a-kube-api-access-79sk6\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.091603 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78d112fe-cdc5-4d0e-8636-49878e3888d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.116019 4812 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/124644b5-886b-4bd1-af08-1ddc88e0ac9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "124644b5-886b-4bd1-af08-1ddc88e0ac9d" (UID: "124644b5-886b-4bd1-af08-1ddc88e0ac9d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.126538 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/124644b5-886b-4bd1-af08-1ddc88e0ac9d-kube-api-access-tn4js" (OuterVolumeSpecName: "kube-api-access-tn4js") pod "124644b5-886b-4bd1-af08-1ddc88e0ac9d" (UID: "124644b5-886b-4bd1-af08-1ddc88e0ac9d"). InnerVolumeSpecName "kube-api-access-tn4js". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.332872 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/124644b5-886b-4bd1-af08-1ddc88e0ac9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.332924 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn4js\" (UniqueName: \"kubernetes.io/projected/124644b5-886b-4bd1-af08-1ddc88e0ac9d-kube-api-access-tn4js\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:40 crc kubenswrapper[4812]: E0216 13:56:40.576203 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod124644b5_886b_4bd1_af08_1ddc88e0ac9d.slice\": RecentStats: unable to find data in memory cache]" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.774410 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1437-account-create-update-xp5qz" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.869679 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2196ced1-8ac4-4012-8791-b9487350bd38-operator-scripts\") pod \"2196ced1-8ac4-4012-8791-b9487350bd38\" (UID: \"2196ced1-8ac4-4012-8791-b9487350bd38\") " Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.869837 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk95j\" (UniqueName: \"kubernetes.io/projected/2196ced1-8ac4-4012-8791-b9487350bd38-kube-api-access-sk95j\") pod \"2196ced1-8ac4-4012-8791-b9487350bd38\" (UID: \"2196ced1-8ac4-4012-8791-b9487350bd38\") " Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.871314 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2196ced1-8ac4-4012-8791-b9487350bd38-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2196ced1-8ac4-4012-8791-b9487350bd38" (UID: "2196ced1-8ac4-4012-8791-b9487350bd38"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.878767 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2196ced1-8ac4-4012-8791-b9487350bd38-kube-api-access-sk95j" (OuterVolumeSpecName: "kube-api-access-sk95j") pod "2196ced1-8ac4-4012-8791-b9487350bd38" (UID: "2196ced1-8ac4-4012-8791-b9487350bd38"). InnerVolumeSpecName "kube-api-access-sk95j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.972284 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2196ced1-8ac4-4012-8791-b9487350bd38-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:40 crc kubenswrapper[4812]: I0216 13:56:40.972327 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk95j\" (UniqueName: \"kubernetes.io/projected/2196ced1-8ac4-4012-8791-b9487350bd38-kube-api-access-sk95j\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:41 crc kubenswrapper[4812]: I0216 13:56:41.063988 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1437-account-create-update-xp5qz" event={"ID":"2196ced1-8ac4-4012-8791-b9487350bd38","Type":"ContainerDied","Data":"fc78cc1725bdfd8439f38c7e2993a0c987ba017d47c3ea3823bae2d396477a27"} Feb 16 13:56:41 crc kubenswrapper[4812]: I0216 13:56:41.064049 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1437-account-create-update-xp5qz" Feb 16 13:56:41 crc kubenswrapper[4812]: I0216 13:56:41.064058 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc78cc1725bdfd8439f38c7e2993a0c987ba017d47c3ea3823bae2d396477a27" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.726853 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5lk8n"] Feb 16 13:56:45 crc kubenswrapper[4812]: E0216 13:56:45.727839 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3e6add-a453-46a2-b3ef-4c92d6c2426a" containerName="mariadb-database-create" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.727854 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3e6add-a453-46a2-b3ef-4c92d6c2426a" containerName="mariadb-database-create" Feb 16 13:56:45 crc kubenswrapper[4812]: E0216 13:56:45.727886 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1622167e-23ac-4689-8708-02bfe0050250" containerName="mariadb-database-create" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.727892 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1622167e-23ac-4689-8708-02bfe0050250" containerName="mariadb-database-create" Feb 16 13:56:45 crc kubenswrapper[4812]: E0216 13:56:45.727901 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99792a16-b3c8-4956-9f97-0c64ad3f97d3" containerName="mariadb-database-create" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.727908 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="99792a16-b3c8-4956-9f97-0c64ad3f97d3" containerName="mariadb-database-create" Feb 16 13:56:45 crc kubenswrapper[4812]: E0216 13:56:45.727918 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d112fe-cdc5-4d0e-8636-49878e3888d9" containerName="mariadb-account-create-update" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.727924 4812 
state_mem.go:107] "Deleted CPUSet assignment" podUID="78d112fe-cdc5-4d0e-8636-49878e3888d9" containerName="mariadb-account-create-update" Feb 16 13:56:45 crc kubenswrapper[4812]: E0216 13:56:45.727932 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2196ced1-8ac4-4012-8791-b9487350bd38" containerName="mariadb-account-create-update" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.727938 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2196ced1-8ac4-4012-8791-b9487350bd38" containerName="mariadb-account-create-update" Feb 16 13:56:45 crc kubenswrapper[4812]: E0216 13:56:45.727961 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124644b5-886b-4bd1-af08-1ddc88e0ac9d" containerName="mariadb-account-create-update" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.727967 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="124644b5-886b-4bd1-af08-1ddc88e0ac9d" containerName="mariadb-account-create-update" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.728188 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2196ced1-8ac4-4012-8791-b9487350bd38" containerName="mariadb-account-create-update" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.728204 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d112fe-cdc5-4d0e-8636-49878e3888d9" containerName="mariadb-account-create-update" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.728220 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="124644b5-886b-4bd1-af08-1ddc88e0ac9d" containerName="mariadb-account-create-update" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.728234 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="99792a16-b3c8-4956-9f97-0c64ad3f97d3" containerName="mariadb-database-create" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.728244 4812 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3c3e6add-a453-46a2-b3ef-4c92d6c2426a" containerName="mariadb-database-create" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.728254 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1622167e-23ac-4689-8708-02bfe0050250" containerName="mariadb-database-create" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.729039 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5lk8n" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.731761 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-6gsjz" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.731903 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.733793 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.736904 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5lk8n"] Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.741691 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-scripts\") pod \"nova-cell0-conductor-db-sync-5lk8n\" (UID: \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\") " pod="openstack/nova-cell0-conductor-db-sync-5lk8n" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.741855 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5lk8n\" (UID: \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\") " 
pod="openstack/nova-cell0-conductor-db-sync-5lk8n" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.742126 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck7h2\" (UniqueName: \"kubernetes.io/projected/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-kube-api-access-ck7h2\") pod \"nova-cell0-conductor-db-sync-5lk8n\" (UID: \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\") " pod="openstack/nova-cell0-conductor-db-sync-5lk8n" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.742344 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-config-data\") pod \"nova-cell0-conductor-db-sync-5lk8n\" (UID: \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\") " pod="openstack/nova-cell0-conductor-db-sync-5lk8n" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.843960 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-config-data\") pod \"nova-cell0-conductor-db-sync-5lk8n\" (UID: \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\") " pod="openstack/nova-cell0-conductor-db-sync-5lk8n" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.844333 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-scripts\") pod \"nova-cell0-conductor-db-sync-5lk8n\" (UID: \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\") " pod="openstack/nova-cell0-conductor-db-sync-5lk8n" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.844432 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5lk8n\" (UID: 
\"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\") " pod="openstack/nova-cell0-conductor-db-sync-5lk8n" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.844543 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck7h2\" (UniqueName: \"kubernetes.io/projected/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-kube-api-access-ck7h2\") pod \"nova-cell0-conductor-db-sync-5lk8n\" (UID: \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\") " pod="openstack/nova-cell0-conductor-db-sync-5lk8n" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.851480 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-scripts\") pod \"nova-cell0-conductor-db-sync-5lk8n\" (UID: \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\") " pod="openstack/nova-cell0-conductor-db-sync-5lk8n" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.851761 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-config-data\") pod \"nova-cell0-conductor-db-sync-5lk8n\" (UID: \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\") " pod="openstack/nova-cell0-conductor-db-sync-5lk8n" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.857945 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5lk8n\" (UID: \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\") " pod="openstack/nova-cell0-conductor-db-sync-5lk8n" Feb 16 13:56:45 crc kubenswrapper[4812]: I0216 13:56:45.871143 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck7h2\" (UniqueName: \"kubernetes.io/projected/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-kube-api-access-ck7h2\") pod \"nova-cell0-conductor-db-sync-5lk8n\" (UID: 
\"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\") " pod="openstack/nova-cell0-conductor-db-sync-5lk8n" Feb 16 13:56:45 crc kubenswrapper[4812]: E0216 13:56:45.883467 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:56:46 crc kubenswrapper[4812]: I0216 13:56:46.051024 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5lk8n" Feb 16 13:56:46 crc kubenswrapper[4812]: W0216 13:56:46.577255 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b9a1ea5_9cb2_4d3e_90fc_fb06c5e3304c.slice/crio-f5912620e60faf8287ee0f90788b631c51dafe51be4661092b42bbf4d9e8e017 WatchSource:0}: Error finding container f5912620e60faf8287ee0f90788b631c51dafe51be4661092b42bbf4d9e8e017: Status 404 returned error can't find the container with id f5912620e60faf8287ee0f90788b631c51dafe51be4661092b42bbf4d9e8e017 Feb 16 13:56:46 crc kubenswrapper[4812]: I0216 13:56:46.582340 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5lk8n"] Feb 16 13:56:47 crc kubenswrapper[4812]: I0216 13:56:47.131014 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5lk8n" event={"ID":"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c","Type":"ContainerStarted","Data":"f5912620e60faf8287ee0f90788b631c51dafe51be4661092b42bbf4d9e8e017"} Feb 16 13:56:47 crc kubenswrapper[4812]: I0216 13:56:47.989499 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.025594 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97982589-9f36-48d3-929e-d6f0d2b83a3b-run-httpd\") pod \"97982589-9f36-48d3-929e-d6f0d2b83a3b\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.025762 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-combined-ca-bundle\") pod \"97982589-9f36-48d3-929e-d6f0d2b83a3b\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.025819 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-scripts\") pod \"97982589-9f36-48d3-929e-d6f0d2b83a3b\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.025884 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-config-data\") pod \"97982589-9f36-48d3-929e-d6f0d2b83a3b\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.025914 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97982589-9f36-48d3-929e-d6f0d2b83a3b-log-httpd\") pod \"97982589-9f36-48d3-929e-d6f0d2b83a3b\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.026053 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-sg-core-conf-yaml\") pod \"97982589-9f36-48d3-929e-d6f0d2b83a3b\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.026088 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbzpw\" (UniqueName: \"kubernetes.io/projected/97982589-9f36-48d3-929e-d6f0d2b83a3b-kube-api-access-jbzpw\") pod \"97982589-9f36-48d3-929e-d6f0d2b83a3b\" (UID: \"97982589-9f36-48d3-929e-d6f0d2b83a3b\") " Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.030945 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97982589-9f36-48d3-929e-d6f0d2b83a3b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "97982589-9f36-48d3-929e-d6f0d2b83a3b" (UID: "97982589-9f36-48d3-929e-d6f0d2b83a3b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.031453 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97982589-9f36-48d3-929e-d6f0d2b83a3b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "97982589-9f36-48d3-929e-d6f0d2b83a3b" (UID: "97982589-9f36-48d3-929e-d6f0d2b83a3b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.046054 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-scripts" (OuterVolumeSpecName: "scripts") pod "97982589-9f36-48d3-929e-d6f0d2b83a3b" (UID: "97982589-9f36-48d3-929e-d6f0d2b83a3b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.046178 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97982589-9f36-48d3-929e-d6f0d2b83a3b-kube-api-access-jbzpw" (OuterVolumeSpecName: "kube-api-access-jbzpw") pod "97982589-9f36-48d3-929e-d6f0d2b83a3b" (UID: "97982589-9f36-48d3-929e-d6f0d2b83a3b"). InnerVolumeSpecName "kube-api-access-jbzpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.076768 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "97982589-9f36-48d3-929e-d6f0d2b83a3b" (UID: "97982589-9f36-48d3-929e-d6f0d2b83a3b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.129638 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.129683 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97982589-9f36-48d3-929e-d6f0d2b83a3b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.129693 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.129708 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbzpw\" (UniqueName: \"kubernetes.io/projected/97982589-9f36-48d3-929e-d6f0d2b83a3b-kube-api-access-jbzpw\") on node 
\"crc\" DevicePath \"\"" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.129717 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97982589-9f36-48d3-929e-d6f0d2b83a3b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.135310 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97982589-9f36-48d3-929e-d6f0d2b83a3b" (UID: "97982589-9f36-48d3-929e-d6f0d2b83a3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.148353 4812 generic.go:334] "Generic (PLEG): container finished" podID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerID="e033f6e74900f7c0d42acf046887ffc6b649134a2e39d95043c1ba9645f96c76" exitCode=137 Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.148423 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97982589-9f36-48d3-929e-d6f0d2b83a3b","Type":"ContainerDied","Data":"e033f6e74900f7c0d42acf046887ffc6b649134a2e39d95043c1ba9645f96c76"} Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.148466 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.148490 4812 scope.go:117] "RemoveContainer" containerID="e033f6e74900f7c0d42acf046887ffc6b649134a2e39d95043c1ba9645f96c76" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.148472 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97982589-9f36-48d3-929e-d6f0d2b83a3b","Type":"ContainerDied","Data":"7b80b5f65f40b757c55e276d6161b9385de1865b8c2a4728d286595a4482655a"} Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.158189 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-config-data" (OuterVolumeSpecName: "config-data") pod "97982589-9f36-48d3-929e-d6f0d2b83a3b" (UID: "97982589-9f36-48d3-929e-d6f0d2b83a3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.174605 4812 scope.go:117] "RemoveContainer" containerID="8cf68110f58598903c08b5a1ffccfdf688bc4f7be44022c86c9feb388cc6a7b9" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.204537 4812 scope.go:117] "RemoveContainer" containerID="5ea5abde6d8dd7532d9530966638b13490d2d4e865f2f8065f792940399033f1" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.230159 4812 scope.go:117] "RemoveContainer" containerID="5b04b8705207f032c0b920d34c5c7af28f4a8e3af57a3ea22c2a52fbf1b98aff" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.231415 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.231464 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97982589-9f36-48d3-929e-d6f0d2b83a3b-config-data\") on node 
\"crc\" DevicePath \"\"" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.256345 4812 scope.go:117] "RemoveContainer" containerID="e033f6e74900f7c0d42acf046887ffc6b649134a2e39d95043c1ba9645f96c76" Feb 16 13:56:48 crc kubenswrapper[4812]: E0216 13:56:48.257256 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e033f6e74900f7c0d42acf046887ffc6b649134a2e39d95043c1ba9645f96c76\": container with ID starting with e033f6e74900f7c0d42acf046887ffc6b649134a2e39d95043c1ba9645f96c76 not found: ID does not exist" containerID="e033f6e74900f7c0d42acf046887ffc6b649134a2e39d95043c1ba9645f96c76" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.257290 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e033f6e74900f7c0d42acf046887ffc6b649134a2e39d95043c1ba9645f96c76"} err="failed to get container status \"e033f6e74900f7c0d42acf046887ffc6b649134a2e39d95043c1ba9645f96c76\": rpc error: code = NotFound desc = could not find container \"e033f6e74900f7c0d42acf046887ffc6b649134a2e39d95043c1ba9645f96c76\": container with ID starting with e033f6e74900f7c0d42acf046887ffc6b649134a2e39d95043c1ba9645f96c76 not found: ID does not exist" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.257314 4812 scope.go:117] "RemoveContainer" containerID="8cf68110f58598903c08b5a1ffccfdf688bc4f7be44022c86c9feb388cc6a7b9" Feb 16 13:56:48 crc kubenswrapper[4812]: E0216 13:56:48.257554 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cf68110f58598903c08b5a1ffccfdf688bc4f7be44022c86c9feb388cc6a7b9\": container with ID starting with 8cf68110f58598903c08b5a1ffccfdf688bc4f7be44022c86c9feb388cc6a7b9 not found: ID does not exist" containerID="8cf68110f58598903c08b5a1ffccfdf688bc4f7be44022c86c9feb388cc6a7b9" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.257572 4812 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cf68110f58598903c08b5a1ffccfdf688bc4f7be44022c86c9feb388cc6a7b9"} err="failed to get container status \"8cf68110f58598903c08b5a1ffccfdf688bc4f7be44022c86c9feb388cc6a7b9\": rpc error: code = NotFound desc = could not find container \"8cf68110f58598903c08b5a1ffccfdf688bc4f7be44022c86c9feb388cc6a7b9\": container with ID starting with 8cf68110f58598903c08b5a1ffccfdf688bc4f7be44022c86c9feb388cc6a7b9 not found: ID does not exist" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.257585 4812 scope.go:117] "RemoveContainer" containerID="5ea5abde6d8dd7532d9530966638b13490d2d4e865f2f8065f792940399033f1" Feb 16 13:56:48 crc kubenswrapper[4812]: E0216 13:56:48.257799 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ea5abde6d8dd7532d9530966638b13490d2d4e865f2f8065f792940399033f1\": container with ID starting with 5ea5abde6d8dd7532d9530966638b13490d2d4e865f2f8065f792940399033f1 not found: ID does not exist" containerID="5ea5abde6d8dd7532d9530966638b13490d2d4e865f2f8065f792940399033f1" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.257820 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ea5abde6d8dd7532d9530966638b13490d2d4e865f2f8065f792940399033f1"} err="failed to get container status \"5ea5abde6d8dd7532d9530966638b13490d2d4e865f2f8065f792940399033f1\": rpc error: code = NotFound desc = could not find container \"5ea5abde6d8dd7532d9530966638b13490d2d4e865f2f8065f792940399033f1\": container with ID starting with 5ea5abde6d8dd7532d9530966638b13490d2d4e865f2f8065f792940399033f1 not found: ID does not exist" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.257836 4812 scope.go:117] "RemoveContainer" containerID="5b04b8705207f032c0b920d34c5c7af28f4a8e3af57a3ea22c2a52fbf1b98aff" Feb 16 13:56:48 crc kubenswrapper[4812]: E0216 
13:56:48.258142 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b04b8705207f032c0b920d34c5c7af28f4a8e3af57a3ea22c2a52fbf1b98aff\": container with ID starting with 5b04b8705207f032c0b920d34c5c7af28f4a8e3af57a3ea22c2a52fbf1b98aff not found: ID does not exist" containerID="5b04b8705207f032c0b920d34c5c7af28f4a8e3af57a3ea22c2a52fbf1b98aff" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.258162 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b04b8705207f032c0b920d34c5c7af28f4a8e3af57a3ea22c2a52fbf1b98aff"} err="failed to get container status \"5b04b8705207f032c0b920d34c5c7af28f4a8e3af57a3ea22c2a52fbf1b98aff\": rpc error: code = NotFound desc = could not find container \"5b04b8705207f032c0b920d34c5c7af28f4a8e3af57a3ea22c2a52fbf1b98aff\": container with ID starting with 5b04b8705207f032c0b920d34c5c7af28f4a8e3af57a3ea22c2a52fbf1b98aff not found: ID does not exist" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.489304 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.509865 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.536691 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:56:48 crc kubenswrapper[4812]: E0216 13:56:48.537153 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="sg-core" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.537176 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="sg-core" Feb 16 13:56:48 crc kubenswrapper[4812]: E0216 13:56:48.537203 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="ceilometer-central-agent" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.537210 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="ceilometer-central-agent" Feb 16 13:56:48 crc kubenswrapper[4812]: E0216 13:56:48.537226 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="ceilometer-notification-agent" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.537232 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="ceilometer-notification-agent" Feb 16 13:56:48 crc kubenswrapper[4812]: E0216 13:56:48.537249 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="proxy-httpd" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.537258 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="proxy-httpd" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.537467 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="ceilometer-central-agent" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.537483 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="proxy-httpd" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.537494 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="ceilometer-notification-agent" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.537510 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" containerName="sg-core" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.543879 4812 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.547038 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.547207 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.570293 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.638643 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwl7c\" (UniqueName: \"kubernetes.io/projected/6839b129-e10a-4127-9d4a-c250a27807b8-kube-api-access-qwl7c\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.638720 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-scripts\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.638784 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6839b129-e10a-4127-9d4a-c250a27807b8-run-httpd\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.638805 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-combined-ca-bundle\") pod \"ceilometer-0\" 
(UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.638829 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6839b129-e10a-4127-9d4a-c250a27807b8-log-httpd\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.638977 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.639026 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-config-data\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.741554 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6839b129-e10a-4127-9d4a-c250a27807b8-run-httpd\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.741757 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.741853 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6839b129-e10a-4127-9d4a-c250a27807b8-log-httpd\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.742122 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.742213 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6839b129-e10a-4127-9d4a-c250a27807b8-run-httpd\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.742364 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6839b129-e10a-4127-9d4a-c250a27807b8-log-httpd\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.742521 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-config-data\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.742722 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwl7c\" (UniqueName: \"kubernetes.io/projected/6839b129-e10a-4127-9d4a-c250a27807b8-kube-api-access-qwl7c\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " 
pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.742872 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-scripts\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.747964 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-config-data\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.748463 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-scripts\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.749530 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.750409 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.762930 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwl7c\" (UniqueName: 
\"kubernetes.io/projected/6839b129-e10a-4127-9d4a-c250a27807b8-kube-api-access-qwl7c\") pod \"ceilometer-0\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " pod="openstack/ceilometer-0" Feb 16 13:56:48 crc kubenswrapper[4812]: I0216 13:56:48.879459 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:56:49 crc kubenswrapper[4812]: I0216 13:56:49.383008 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:56:49 crc kubenswrapper[4812]: W0216 13:56:49.396666 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6839b129_e10a_4127_9d4a_c250a27807b8.slice/crio-78a9c16af49e0080c9808a0865fb2673a3ba9ed0e93fef1f0ecd930754935c53 WatchSource:0}: Error finding container 78a9c16af49e0080c9808a0865fb2673a3ba9ed0e93fef1f0ecd930754935c53: Status 404 returned error can't find the container with id 78a9c16af49e0080c9808a0865fb2673a3ba9ed0e93fef1f0ecd930754935c53 Feb 16 13:56:49 crc kubenswrapper[4812]: I0216 13:56:49.893603 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97982589-9f36-48d3-929e-d6f0d2b83a3b" path="/var/lib/kubelet/pods/97982589-9f36-48d3-929e-d6f0d2b83a3b/volumes" Feb 16 13:56:50 crc kubenswrapper[4812]: I0216 13:56:50.200079 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6839b129-e10a-4127-9d4a-c250a27807b8","Type":"ContainerStarted","Data":"78a9c16af49e0080c9808a0865fb2673a3ba9ed0e93fef1f0ecd930754935c53"} Feb 16 13:56:50 crc kubenswrapper[4812]: I0216 13:56:50.515215 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:56:56 crc kubenswrapper[4812]: I0216 13:56:56.291062 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5lk8n" 
event={"ID":"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c","Type":"ContainerStarted","Data":"dd8a762aa4a7f6dcf51ecd0d2a09f6a31fcbeb7037cb9a6c477d3fc18f074a98"} Feb 16 13:56:56 crc kubenswrapper[4812]: I0216 13:56:56.297283 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6839b129-e10a-4127-9d4a-c250a27807b8","Type":"ContainerStarted","Data":"2a15a21a759fbcefa8c33ffc4cb0bd60055d926285bf4536086e8b996e7ea7fc"} Feb 16 13:56:56 crc kubenswrapper[4812]: I0216 13:56:56.317489 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-5lk8n" podStartSLOduration=2.68501463 podStartE2EDuration="11.317447917s" podCreationTimestamp="2026-02-16 13:56:45 +0000 UTC" firstStartedPulling="2026-02-16 13:56:46.585374236 +0000 UTC m=+1495.649704937" lastFinishedPulling="2026-02-16 13:56:55.217807503 +0000 UTC m=+1504.282138224" observedRunningTime="2026-02-16 13:56:56.309729292 +0000 UTC m=+1505.374059993" watchObservedRunningTime="2026-02-16 13:56:56.317447917 +0000 UTC m=+1505.381778618" Feb 16 13:56:57 crc kubenswrapper[4812]: I0216 13:56:57.313118 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6839b129-e10a-4127-9d4a-c250a27807b8","Type":"ContainerStarted","Data":"b53850b74879e99c2d6b08b65891028cb0c81e726444c695dad83ce6f380f5ed"} Feb 16 13:56:57 crc kubenswrapper[4812]: I0216 13:56:57.314055 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6839b129-e10a-4127-9d4a-c250a27807b8","Type":"ContainerStarted","Data":"c9f72ca761f670189b453471a6d4a1ff3ae863ee00a19ab6f1a56de76db57729"} Feb 16 13:56:57 crc kubenswrapper[4812]: E0216 13:56:57.979490 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in 
quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 13:56:57 crc kubenswrapper[4812]: E0216 13:56:57.979565 4812 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 13:56:57 crc kubenswrapper[4812]: E0216 13:56:57.979716 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq54s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-krnzs_openstack(a7d4eae6-781f-4675-a6c3-ee0f1589c735): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 13:56:57 crc kubenswrapper[4812]: E0216 13:56:57.981283 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:56:58 crc kubenswrapper[4812]: I0216 13:56:58.407567 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7kbl8"] Feb 16 13:56:58 crc kubenswrapper[4812]: I0216 13:56:58.410412 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7kbl8" Feb 16 13:56:58 crc kubenswrapper[4812]: I0216 13:56:58.422903 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7kbl8"] Feb 16 13:56:58 crc kubenswrapper[4812]: I0216 13:56:58.609257 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-catalog-content\") pod \"redhat-operators-7kbl8\" (UID: \"7ce2ca0e-7e12-44aa-a876-3e47d60aed95\") " pod="openshift-marketplace/redhat-operators-7kbl8" Feb 16 13:56:58 crc kubenswrapper[4812]: I0216 13:56:58.609594 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-utilities\") pod \"redhat-operators-7kbl8\" (UID: \"7ce2ca0e-7e12-44aa-a876-3e47d60aed95\") " pod="openshift-marketplace/redhat-operators-7kbl8" Feb 16 13:56:58 crc kubenswrapper[4812]: I0216 13:56:58.609900 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k59g\" (UniqueName: \"kubernetes.io/projected/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-kube-api-access-5k59g\") pod \"redhat-operators-7kbl8\" (UID: \"7ce2ca0e-7e12-44aa-a876-3e47d60aed95\") " pod="openshift-marketplace/redhat-operators-7kbl8" Feb 16 13:56:58 crc kubenswrapper[4812]: I0216 13:56:58.711667 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-utilities\") pod \"redhat-operators-7kbl8\" (UID: \"7ce2ca0e-7e12-44aa-a876-3e47d60aed95\") " pod="openshift-marketplace/redhat-operators-7kbl8" Feb 16 13:56:58 crc kubenswrapper[4812]: I0216 13:56:58.711820 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5k59g\" (UniqueName: \"kubernetes.io/projected/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-kube-api-access-5k59g\") pod \"redhat-operators-7kbl8\" (UID: \"7ce2ca0e-7e12-44aa-a876-3e47d60aed95\") " pod="openshift-marketplace/redhat-operators-7kbl8" Feb 16 13:56:58 crc kubenswrapper[4812]: I0216 13:56:58.711885 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-catalog-content\") pod \"redhat-operators-7kbl8\" (UID: \"7ce2ca0e-7e12-44aa-a876-3e47d60aed95\") " pod="openshift-marketplace/redhat-operators-7kbl8" Feb 16 13:56:58 crc kubenswrapper[4812]: I0216 13:56:58.712355 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-utilities\") pod \"redhat-operators-7kbl8\" (UID: \"7ce2ca0e-7e12-44aa-a876-3e47d60aed95\") " pod="openshift-marketplace/redhat-operators-7kbl8" Feb 16 13:56:58 crc kubenswrapper[4812]: I0216 13:56:58.712396 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-catalog-content\") pod \"redhat-operators-7kbl8\" (UID: \"7ce2ca0e-7e12-44aa-a876-3e47d60aed95\") " pod="openshift-marketplace/redhat-operators-7kbl8" Feb 16 13:56:58 crc kubenswrapper[4812]: I0216 13:56:58.734309 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k59g\" (UniqueName: \"kubernetes.io/projected/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-kube-api-access-5k59g\") pod \"redhat-operators-7kbl8\" (UID: \"7ce2ca0e-7e12-44aa-a876-3e47d60aed95\") " pod="openshift-marketplace/redhat-operators-7kbl8" Feb 16 13:56:58 crc kubenswrapper[4812]: I0216 13:56:58.748868 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7kbl8" Feb 16 13:56:59 crc kubenswrapper[4812]: I0216 13:56:59.331537 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7kbl8"] Feb 16 13:56:59 crc kubenswrapper[4812]: W0216 13:56:59.339922 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ce2ca0e_7e12_44aa_a876_3e47d60aed95.slice/crio-fdaaa1ad80563b4d419a0414f58027d6df28caf29d5dc06262219bad374cb068 WatchSource:0}: Error finding container fdaaa1ad80563b4d419a0414f58027d6df28caf29d5dc06262219bad374cb068: Status 404 returned error can't find the container with id fdaaa1ad80563b4d419a0414f58027d6df28caf29d5dc06262219bad374cb068 Feb 16 13:56:59 crc kubenswrapper[4812]: I0216 13:56:59.386797 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6839b129-e10a-4127-9d4a-c250a27807b8","Type":"ContainerStarted","Data":"24d2e1b5f16a9306dd74ef82dd34c37d96ccd365919d490ae3ba923edb69b398"} Feb 16 13:56:59 crc kubenswrapper[4812]: I0216 13:56:59.387008 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" containerName="ceilometer-central-agent" containerID="cri-o://2a15a21a759fbcefa8c33ffc4cb0bd60055d926285bf4536086e8b996e7ea7fc" gracePeriod=30 Feb 16 13:56:59 crc kubenswrapper[4812]: I0216 13:56:59.387307 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:56:59 crc kubenswrapper[4812]: I0216 13:56:59.387675 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" containerName="proxy-httpd" containerID="cri-o://24d2e1b5f16a9306dd74ef82dd34c37d96ccd365919d490ae3ba923edb69b398" gracePeriod=30 Feb 16 13:56:59 crc kubenswrapper[4812]: I0216 
13:56:59.387751 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" containerName="sg-core" containerID="cri-o://c9f72ca761f670189b453471a6d4a1ff3ae863ee00a19ab6f1a56de76db57729" gracePeriod=30 Feb 16 13:56:59 crc kubenswrapper[4812]: I0216 13:56:59.387808 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" containerName="ceilometer-notification-agent" containerID="cri-o://b53850b74879e99c2d6b08b65891028cb0c81e726444c695dad83ce6f380f5ed" gracePeriod=30 Feb 16 13:56:59 crc kubenswrapper[4812]: I0216 13:56:59.391529 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kbl8" event={"ID":"7ce2ca0e-7e12-44aa-a876-3e47d60aed95","Type":"ContainerStarted","Data":"fdaaa1ad80563b4d419a0414f58027d6df28caf29d5dc06262219bad374cb068"} Feb 16 13:56:59 crc kubenswrapper[4812]: I0216 13:56:59.428960 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.746942201 podStartE2EDuration="11.428931407s" podCreationTimestamp="2026-02-16 13:56:48 +0000 UTC" firstStartedPulling="2026-02-16 13:56:49.401670928 +0000 UTC m=+1498.466001629" lastFinishedPulling="2026-02-16 13:56:58.083660134 +0000 UTC m=+1507.147990835" observedRunningTime="2026-02-16 13:56:59.422374577 +0000 UTC m=+1508.486705278" watchObservedRunningTime="2026-02-16 13:56:59.428931407 +0000 UTC m=+1508.493262108" Feb 16 13:57:00 crc kubenswrapper[4812]: I0216 13:57:00.404766 4812 generic.go:334] "Generic (PLEG): container finished" podID="7ce2ca0e-7e12-44aa-a876-3e47d60aed95" containerID="c9b6f3567174cb36395d42ad4362525bd693f9f8b91c4047dc6bca0f8e7a0674" exitCode=0 Feb 16 13:57:00 crc kubenswrapper[4812]: I0216 13:57:00.404872 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-7kbl8" event={"ID":"7ce2ca0e-7e12-44aa-a876-3e47d60aed95","Type":"ContainerDied","Data":"c9b6f3567174cb36395d42ad4362525bd693f9f8b91c4047dc6bca0f8e7a0674"} Feb 16 13:57:00 crc kubenswrapper[4812]: I0216 13:57:00.410220 4812 generic.go:334] "Generic (PLEG): container finished" podID="6839b129-e10a-4127-9d4a-c250a27807b8" containerID="24d2e1b5f16a9306dd74ef82dd34c37d96ccd365919d490ae3ba923edb69b398" exitCode=0 Feb 16 13:57:00 crc kubenswrapper[4812]: I0216 13:57:00.410259 4812 generic.go:334] "Generic (PLEG): container finished" podID="6839b129-e10a-4127-9d4a-c250a27807b8" containerID="c9f72ca761f670189b453471a6d4a1ff3ae863ee00a19ab6f1a56de76db57729" exitCode=2 Feb 16 13:57:00 crc kubenswrapper[4812]: I0216 13:57:00.410274 4812 generic.go:334] "Generic (PLEG): container finished" podID="6839b129-e10a-4127-9d4a-c250a27807b8" containerID="b53850b74879e99c2d6b08b65891028cb0c81e726444c695dad83ce6f380f5ed" exitCode=0 Feb 16 13:57:00 crc kubenswrapper[4812]: I0216 13:57:00.410300 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6839b129-e10a-4127-9d4a-c250a27807b8","Type":"ContainerDied","Data":"24d2e1b5f16a9306dd74ef82dd34c37d96ccd365919d490ae3ba923edb69b398"} Feb 16 13:57:00 crc kubenswrapper[4812]: I0216 13:57:00.410333 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6839b129-e10a-4127-9d4a-c250a27807b8","Type":"ContainerDied","Data":"c9f72ca761f670189b453471a6d4a1ff3ae863ee00a19ab6f1a56de76db57729"} Feb 16 13:57:00 crc kubenswrapper[4812]: I0216 13:57:00.410346 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6839b129-e10a-4127-9d4a-c250a27807b8","Type":"ContainerDied","Data":"b53850b74879e99c2d6b08b65891028cb0c81e726444c695dad83ce6f380f5ed"} Feb 16 13:57:01 crc kubenswrapper[4812]: I0216 13:57:01.442359 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-7kbl8" event={"ID":"7ce2ca0e-7e12-44aa-a876-3e47d60aed95","Type":"ContainerStarted","Data":"e2c0b4d8f087b680a0337a6434e286d72a0cc71e0cb98f547c602345e96406df"} Feb 16 13:57:04 crc kubenswrapper[4812]: I0216 13:57:04.692461 4812 generic.go:334] "Generic (PLEG): container finished" podID="7ce2ca0e-7e12-44aa-a876-3e47d60aed95" containerID="e2c0b4d8f087b680a0337a6434e286d72a0cc71e0cb98f547c602345e96406df" exitCode=0 Feb 16 13:57:04 crc kubenswrapper[4812]: I0216 13:57:04.692537 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kbl8" event={"ID":"7ce2ca0e-7e12-44aa-a876-3e47d60aed95","Type":"ContainerDied","Data":"e2c0b4d8f087b680a0337a6434e286d72a0cc71e0cb98f547c602345e96406df"} Feb 16 13:57:05 crc kubenswrapper[4812]: I0216 13:57:05.708063 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kbl8" event={"ID":"7ce2ca0e-7e12-44aa-a876-3e47d60aed95","Type":"ContainerStarted","Data":"338a0e844c3f79e8bf59ef633cc5b2d21e265f6fd2c6096b9b8a28980b4ad25f"} Feb 16 13:57:05 crc kubenswrapper[4812]: I0216 13:57:05.736207 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7kbl8" podStartSLOduration=3.036155284 podStartE2EDuration="7.736176908s" podCreationTimestamp="2026-02-16 13:56:58 +0000 UTC" firstStartedPulling="2026-02-16 13:57:00.40780022 +0000 UTC m=+1509.472130921" lastFinishedPulling="2026-02-16 13:57:05.107821844 +0000 UTC m=+1514.172152545" observedRunningTime="2026-02-16 13:57:05.729121803 +0000 UTC m=+1514.793452524" watchObservedRunningTime="2026-02-16 13:57:05.736176908 +0000 UTC m=+1514.800507609" Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.455408 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.476595 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-config-data\") pod \"6839b129-e10a-4127-9d4a-c250a27807b8\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.476707 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-combined-ca-bundle\") pod \"6839b129-e10a-4127-9d4a-c250a27807b8\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.476841 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6839b129-e10a-4127-9d4a-c250a27807b8-log-httpd\") pod \"6839b129-e10a-4127-9d4a-c250a27807b8\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.476900 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwl7c\" (UniqueName: \"kubernetes.io/projected/6839b129-e10a-4127-9d4a-c250a27807b8-kube-api-access-qwl7c\") pod \"6839b129-e10a-4127-9d4a-c250a27807b8\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.476920 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-scripts\") pod \"6839b129-e10a-4127-9d4a-c250a27807b8\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.477021 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6839b129-e10a-4127-9d4a-c250a27807b8-run-httpd\") pod \"6839b129-e10a-4127-9d4a-c250a27807b8\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.477067 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-sg-core-conf-yaml\") pod \"6839b129-e10a-4127-9d4a-c250a27807b8\" (UID: \"6839b129-e10a-4127-9d4a-c250a27807b8\") " Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.478318 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6839b129-e10a-4127-9d4a-c250a27807b8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6839b129-e10a-4127-9d4a-c250a27807b8" (UID: "6839b129-e10a-4127-9d4a-c250a27807b8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.478595 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6839b129-e10a-4127-9d4a-c250a27807b8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6839b129-e10a-4127-9d4a-c250a27807b8" (UID: "6839b129-e10a-4127-9d4a-c250a27807b8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.497037 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6839b129-e10a-4127-9d4a-c250a27807b8-kube-api-access-qwl7c" (OuterVolumeSpecName: "kube-api-access-qwl7c") pod "6839b129-e10a-4127-9d4a-c250a27807b8" (UID: "6839b129-e10a-4127-9d4a-c250a27807b8"). InnerVolumeSpecName "kube-api-access-qwl7c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.497223 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-scripts" (OuterVolumeSpecName: "scripts") pod "6839b129-e10a-4127-9d4a-c250a27807b8" (UID: "6839b129-e10a-4127-9d4a-c250a27807b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.530819 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6839b129-e10a-4127-9d4a-c250a27807b8" (UID: "6839b129-e10a-4127-9d4a-c250a27807b8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.580220 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6839b129-e10a-4127-9d4a-c250a27807b8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.580707 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.580725 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6839b129-e10a-4127-9d4a-c250a27807b8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.580738 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwl7c\" (UniqueName: \"kubernetes.io/projected/6839b129-e10a-4127-9d4a-c250a27807b8-kube-api-access-qwl7c\") on node \"crc\" DevicePath \"\"" Feb 16 
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.580749 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.594323 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6839b129-e10a-4127-9d4a-c250a27807b8" (UID: "6839b129-e10a-4127-9d4a-c250a27807b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.637465 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-config-data" (OuterVolumeSpecName: "config-data") pod "6839b129-e10a-4127-9d4a-c250a27807b8" (UID: "6839b129-e10a-4127-9d4a-c250a27807b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.682767 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.682820 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6839b129-e10a-4127-9d4a-c250a27807b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.741833 4812 generic.go:334] "Generic (PLEG): container finished" podID="6839b129-e10a-4127-9d4a-c250a27807b8" containerID="2a15a21a759fbcefa8c33ffc4cb0bd60055d926285bf4536086e8b996e7ea7fc" exitCode=0
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.741895 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6839b129-e10a-4127-9d4a-c250a27807b8","Type":"ContainerDied","Data":"2a15a21a759fbcefa8c33ffc4cb0bd60055d926285bf4536086e8b996e7ea7fc"}
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.741933 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6839b129-e10a-4127-9d4a-c250a27807b8","Type":"ContainerDied","Data":"78a9c16af49e0080c9808a0865fb2673a3ba9ed0e93fef1f0ecd930754935c53"}
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.741958 4812 scope.go:117] "RemoveContainer" containerID="24d2e1b5f16a9306dd74ef82dd34c37d96ccd365919d490ae3ba923edb69b398"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.742016 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.814022 4812 scope.go:117] "RemoveContainer" containerID="c9f72ca761f670189b453471a6d4a1ff3ae863ee00a19ab6f1a56de76db57729"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.814233 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.827402 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.834720 4812 scope.go:117] "RemoveContainer" containerID="b53850b74879e99c2d6b08b65891028cb0c81e726444c695dad83ce6f380f5ed"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.852516 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:57:07 crc kubenswrapper[4812]: E0216 13:57:07.853071 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" containerName="proxy-httpd"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.853121 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" containerName="proxy-httpd"
Feb 16 13:57:07 crc kubenswrapper[4812]: E0216 13:57:07.853141 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" containerName="sg-core"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.853151 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" containerName="sg-core"
Feb 16 13:57:07 crc kubenswrapper[4812]: E0216 13:57:07.853163 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" containerName="ceilometer-central-agent"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.853174 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" containerName="ceilometer-central-agent"
Feb 16 13:57:07 crc kubenswrapper[4812]: E0216 13:57:07.853185 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" containerName="ceilometer-notification-agent"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.853192 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" containerName="ceilometer-notification-agent"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.853385 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" containerName="proxy-httpd"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.853403 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" containerName="sg-core"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.853412 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" containerName="ceilometer-central-agent"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.853425 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" containerName="ceilometer-notification-agent"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.856169 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.862533 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.862647 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.873857 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.886697 4812 scope.go:117] "RemoveContainer" containerID="2a15a21a759fbcefa8c33ffc4cb0bd60055d926285bf4536086e8b996e7ea7fc"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.890506 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.890649 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrlhg\" (UniqueName: \"kubernetes.io/projected/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-kube-api-access-nrlhg\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.890856 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-run-httpd\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.890963 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.891014 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-config-data\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.891047 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-log-httpd\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.891095 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-scripts\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.909900 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6839b129-e10a-4127-9d4a-c250a27807b8" path="/var/lib/kubelet/pods/6839b129-e10a-4127-9d4a-c250a27807b8/volumes"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.932024 4812 scope.go:117] "RemoveContainer" containerID="24d2e1b5f16a9306dd74ef82dd34c37d96ccd365919d490ae3ba923edb69b398"
Feb 16 13:57:07 crc kubenswrapper[4812]: E0216 13:57:07.935270 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24d2e1b5f16a9306dd74ef82dd34c37d96ccd365919d490ae3ba923edb69b398\": container with ID starting with 24d2e1b5f16a9306dd74ef82dd34c37d96ccd365919d490ae3ba923edb69b398 not found: ID does not exist" containerID="24d2e1b5f16a9306dd74ef82dd34c37d96ccd365919d490ae3ba923edb69b398"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.935404 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24d2e1b5f16a9306dd74ef82dd34c37d96ccd365919d490ae3ba923edb69b398"} err="failed to get container status \"24d2e1b5f16a9306dd74ef82dd34c37d96ccd365919d490ae3ba923edb69b398\": rpc error: code = NotFound desc = could not find container \"24d2e1b5f16a9306dd74ef82dd34c37d96ccd365919d490ae3ba923edb69b398\": container with ID starting with 24d2e1b5f16a9306dd74ef82dd34c37d96ccd365919d490ae3ba923edb69b398 not found: ID does not exist"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.935526 4812 scope.go:117] "RemoveContainer" containerID="c9f72ca761f670189b453471a6d4a1ff3ae863ee00a19ab6f1a56de76db57729"
Feb 16 13:57:07 crc kubenswrapper[4812]: E0216 13:57:07.936031 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9f72ca761f670189b453471a6d4a1ff3ae863ee00a19ab6f1a56de76db57729\": container with ID starting with c9f72ca761f670189b453471a6d4a1ff3ae863ee00a19ab6f1a56de76db57729 not found: ID does not exist" containerID="c9f72ca761f670189b453471a6d4a1ff3ae863ee00a19ab6f1a56de76db57729"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.936120 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9f72ca761f670189b453471a6d4a1ff3ae863ee00a19ab6f1a56de76db57729"} err="failed to get container status \"c9f72ca761f670189b453471a6d4a1ff3ae863ee00a19ab6f1a56de76db57729\": rpc error: code = NotFound desc = could not find container \"c9f72ca761f670189b453471a6d4a1ff3ae863ee00a19ab6f1a56de76db57729\": container with ID starting with c9f72ca761f670189b453471a6d4a1ff3ae863ee00a19ab6f1a56de76db57729 not found: ID does not exist"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.936188 4812 scope.go:117] "RemoveContainer" containerID="b53850b74879e99c2d6b08b65891028cb0c81e726444c695dad83ce6f380f5ed"
Feb 16 13:57:07 crc kubenswrapper[4812]: E0216 13:57:07.937389 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b53850b74879e99c2d6b08b65891028cb0c81e726444c695dad83ce6f380f5ed\": container with ID starting with b53850b74879e99c2d6b08b65891028cb0c81e726444c695dad83ce6f380f5ed not found: ID does not exist" containerID="b53850b74879e99c2d6b08b65891028cb0c81e726444c695dad83ce6f380f5ed"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.937523 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b53850b74879e99c2d6b08b65891028cb0c81e726444c695dad83ce6f380f5ed"} err="failed to get container status \"b53850b74879e99c2d6b08b65891028cb0c81e726444c695dad83ce6f380f5ed\": rpc error: code = NotFound desc = could not find container \"b53850b74879e99c2d6b08b65891028cb0c81e726444c695dad83ce6f380f5ed\": container with ID starting with b53850b74879e99c2d6b08b65891028cb0c81e726444c695dad83ce6f380f5ed not found: ID does not exist"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.937605 4812 scope.go:117] "RemoveContainer" containerID="2a15a21a759fbcefa8c33ffc4cb0bd60055d926285bf4536086e8b996e7ea7fc"
Feb 16 13:57:07 crc kubenswrapper[4812]: E0216 13:57:07.937935 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a15a21a759fbcefa8c33ffc4cb0bd60055d926285bf4536086e8b996e7ea7fc\": container with ID starting with 2a15a21a759fbcefa8c33ffc4cb0bd60055d926285bf4536086e8b996e7ea7fc not found: ID does not exist" containerID="2a15a21a759fbcefa8c33ffc4cb0bd60055d926285bf4536086e8b996e7ea7fc"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.938021 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a15a21a759fbcefa8c33ffc4cb0bd60055d926285bf4536086e8b996e7ea7fc"} err="failed to get container status \"2a15a21a759fbcefa8c33ffc4cb0bd60055d926285bf4536086e8b996e7ea7fc\": rpc error: code = NotFound desc = could not find container \"2a15a21a759fbcefa8c33ffc4cb0bd60055d926285bf4536086e8b996e7ea7fc\": container with ID starting with 2a15a21a759fbcefa8c33ffc4cb0bd60055d926285bf4536086e8b996e7ea7fc not found: ID does not exist"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.993299 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-run-httpd\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.993418 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.993469 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-config-data\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.993501 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-log-httpd\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.993527 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-scripts\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.993594 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.993631 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrlhg\" (UniqueName: \"kubernetes.io/projected/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-kube-api-access-nrlhg\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.995466 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-run-httpd\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:07 crc kubenswrapper[4812]: I0216 13:57:07.996465 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-log-httpd\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:08 crc kubenswrapper[4812]: I0216 13:57:08.002352 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-config-data\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:08 crc kubenswrapper[4812]: I0216 13:57:08.003105 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:08 crc kubenswrapper[4812]: I0216 13:57:08.004057 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:08 crc kubenswrapper[4812]: I0216 13:57:08.004247 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-scripts\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:08 crc kubenswrapper[4812]: I0216 13:57:08.010308 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrlhg\" (UniqueName: \"kubernetes.io/projected/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-kube-api-access-nrlhg\") pod \"ceilometer-0\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") " pod="openstack/ceilometer-0"
Feb 16 13:57:08 crc kubenswrapper[4812]: I0216 13:57:08.188425 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:57:08 crc kubenswrapper[4812]: I0216 13:57:08.682366 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:57:08 crc kubenswrapper[4812]: W0216 13:57:08.686817 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2b86c67_a2d2_4146_a4f1_46bae3ff6975.slice/crio-8d97c67a49bb088a1118b897b166406892a5a368bbf49558de951f4c7f4b5e06 WatchSource:0}: Error finding container 8d97c67a49bb088a1118b897b166406892a5a368bbf49558de951f4c7f4b5e06: Status 404 returned error can't find the container with id 8d97c67a49bb088a1118b897b166406892a5a368bbf49558de951f4c7f4b5e06
Feb 16 13:57:08 crc kubenswrapper[4812]: I0216 13:57:08.749071 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7kbl8"
Feb 16 13:57:08 crc kubenswrapper[4812]: I0216 13:57:08.752231 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7kbl8"
Feb 16 13:57:08 crc kubenswrapper[4812]: I0216 13:57:08.754587 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2b86c67-a2d2-4146-a4f1-46bae3ff6975","Type":"ContainerStarted","Data":"8d97c67a49bb088a1118b897b166406892a5a368bbf49558de951f4c7f4b5e06"}
Feb 16 13:57:09 crc kubenswrapper[4812]: I0216 13:57:09.769432 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2b86c67-a2d2-4146-a4f1-46bae3ff6975","Type":"ContainerStarted","Data":"984afce9e09b6642d6997623b59afefd1e45a0c4b5257dd3859afd38a198b65e"}
Feb 16 13:57:09 crc kubenswrapper[4812]: I0216 13:57:09.802920 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7kbl8" podUID="7ce2ca0e-7e12-44aa-a876-3e47d60aed95" containerName="registry-server" probeResult="failure" output=<
Feb 16 13:57:09 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s
Feb 16 13:57:09 crc kubenswrapper[4812]: >
Feb 16 13:57:09 crc kubenswrapper[4812]: E0216 13:57:09.982493 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735"
Feb 16 13:57:10 crc kubenswrapper[4812]: I0216 13:57:10.781266 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2b86c67-a2d2-4146-a4f1-46bae3ff6975","Type":"ContainerStarted","Data":"d98c683017ff6532d04cccdd3384d2febb5ae3f24620c374a858f107cb14ba52"}
Feb 16 13:57:11 crc kubenswrapper[4812]: I0216 13:57:11.793109 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2b86c67-a2d2-4146-a4f1-46bae3ff6975","Type":"ContainerStarted","Data":"b73221b6add0c9f2b9a6a2a5d469014eb65efb75559fc8afab9963f99a8f672e"}
Feb 16 13:57:11 crc kubenswrapper[4812]: I0216 13:57:11.795018 4812 generic.go:334] "Generic (PLEG): container finished" podID="7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c" containerID="dd8a762aa4a7f6dcf51ecd0d2a09f6a31fcbeb7037cb9a6c477d3fc18f074a98" exitCode=0
Feb 16 13:57:11 crc kubenswrapper[4812]: I0216 13:57:11.795074 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5lk8n" event={"ID":"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c","Type":"ContainerDied","Data":"dd8a762aa4a7f6dcf51ecd0d2a09f6a31fcbeb7037cb9a6c477d3fc18f074a98"}
Feb 16 13:57:12 crc kubenswrapper[4812]: I0216 13:57:12.845845 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2b86c67-a2d2-4146-a4f1-46bae3ff6975","Type":"ContainerStarted","Data":"2973c3ca880517f2f97ba2c9aec5581db6a1306d56a6f37e649e47908ca17f46"}
Feb 16 13:57:12 crc kubenswrapper[4812]: I0216 13:57:12.846290 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.324376 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5lk8n"
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.350909 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.705013849 podStartE2EDuration="6.350884303s" podCreationTimestamp="2026-02-16 13:57:07 +0000 UTC" firstStartedPulling="2026-02-16 13:57:08.689622445 +0000 UTC m=+1517.753953146" lastFinishedPulling="2026-02-16 13:57:12.335492909 +0000 UTC m=+1521.399823600" observedRunningTime="2026-02-16 13:57:12.881086768 +0000 UTC m=+1521.945417489" watchObservedRunningTime="2026-02-16 13:57:13.350884303 +0000 UTC m=+1522.415215004"
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.459957 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-config-data\") pod \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\" (UID: \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\") "
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.460128 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck7h2\" (UniqueName: \"kubernetes.io/projected/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-kube-api-access-ck7h2\") pod \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\" (UID: \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\") "
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.460217 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-scripts\") pod \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\" (UID: \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\") "
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.460258 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-combined-ca-bundle\") pod \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\" (UID: \"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c\") "
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.466726 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-scripts" (OuterVolumeSpecName: "scripts") pod "7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c" (UID: "7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.466827 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-kube-api-access-ck7h2" (OuterVolumeSpecName: "kube-api-access-ck7h2") pod "7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c" (UID: "7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c"). InnerVolumeSpecName "kube-api-access-ck7h2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.489026 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-config-data" (OuterVolumeSpecName: "config-data") pod "7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c" (UID: "7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.491059 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c" (UID: "7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.564859 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck7h2\" (UniqueName: \"kubernetes.io/projected/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-kube-api-access-ck7h2\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.565192 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.565204 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.565212 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.864694 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5lk8n" event={"ID":"7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c","Type":"ContainerDied","Data":"f5912620e60faf8287ee0f90788b631c51dafe51be4661092b42bbf4d9e8e017"}
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.864789 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5912620e60faf8287ee0f90788b631c51dafe51be4661092b42bbf4d9e8e017"
Feb 16 13:57:13 crc kubenswrapper[4812]: I0216 13:57:13.864804 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5lk8n"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.044564 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 13:57:14 crc kubenswrapper[4812]: E0216 13:57:14.045414 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c" containerName="nova-cell0-conductor-db-sync"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.045549 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c" containerName="nova-cell0-conductor-db-sync"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.045973 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c" containerName="nova-cell0-conductor-db-sync"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.047203 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.053693 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-6gsjz"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.054381 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.058089 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.077960 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.080621 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d277\" (UniqueName: \"kubernetes.io/projected/f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68-kube-api-access-2d277\") pod \"nova-cell0-conductor-0\" (UID: \"f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.081034 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.183082 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d277\" (UniqueName: \"kubernetes.io/projected/f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68-kube-api-access-2d277\") pod \"nova-cell0-conductor-0\" (UID: \"f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.183140 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.183209 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.195301 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.207172 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.210849 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d277\" (UniqueName: \"kubernetes.io/projected/f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68-kube-api-access-2d277\") pod \"nova-cell0-conductor-0\" (UID: \"f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.371119 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.549949 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.550057 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 13:57:14 crc kubenswrapper[4812]: I0216 13:57:14.900121 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 13:57:15 crc kubenswrapper[4812]: I0216 13:57:15.894491 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68","Type":"ContainerStarted","Data":"5dc0d4b72a970acd17044408b7b38bc283afb2d6c9ac0e203cd717fad11955fc"}
Feb 16 13:57:15 crc kubenswrapper[4812]: I0216 13:57:15.894850 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68","Type":"ContainerStarted","Data":"743fd678d7fd508b339a16a64bd405a6061c903d85a4464d2c0736c09d110646"}
Feb 16 13:57:15 crc kubenswrapper[4812]: I0216 13:57:15.894893 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Feb 16 13:57:15 crc kubenswrapper[4812]: I0216 13:57:15.922067 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.922039628 podStartE2EDuration="1.922039628s" podCreationTimestamp="2026-02-16 13:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:57:15.915832058 +0000 UTC m=+1524.980162759" watchObservedRunningTime="2026-02-16 13:57:15.922039628 +0000 UTC m=+1524.986370319"
Feb 16 13:57:18 crc kubenswrapper[4812]: I0216 13:57:18.796530 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7kbl8"
Feb 16 13:57:18 crc kubenswrapper[4812]: I0216 13:57:18.853686 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7kbl8"
Feb 16 13:57:19 crc kubenswrapper[4812]: I0216 13:57:19.039910 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7kbl8"]
Feb 16 13:57:19 crc kubenswrapper[4812]: I0216 13:57:19.937023 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7kbl8" podUID="7ce2ca0e-7e12-44aa-a876-3e47d60aed95" containerName="registry-server" containerID="cri-o://338a0e844c3f79e8bf59ef633cc5b2d21e265f6fd2c6096b9b8a28980b4ad25f" gracePeriod=2
Feb 16 13:57:20 crc kubenswrapper[4812]: I0216 13:57:20.445766 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7kbl8"
Feb 16 13:57:20 crc kubenswrapper[4812]: I0216 13:57:20.596131 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-catalog-content\") pod \"7ce2ca0e-7e12-44aa-a876-3e47d60aed95\" (UID: \"7ce2ca0e-7e12-44aa-a876-3e47d60aed95\") "
Feb 16 13:57:20 crc kubenswrapper[4812]: I0216 13:57:20.596274 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k59g\" (UniqueName: \"kubernetes.io/projected/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-kube-api-access-5k59g\") pod \"7ce2ca0e-7e12-44aa-a876-3e47d60aed95\" (UID: \"7ce2ca0e-7e12-44aa-a876-3e47d60aed95\") "
Feb 16 13:57:20 crc kubenswrapper[4812]: I0216 13:57:20.596424 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-utilities\") pod \"7ce2ca0e-7e12-44aa-a876-3e47d60aed95\" (UID: \"7ce2ca0e-7e12-44aa-a876-3e47d60aed95\") "
Feb 16 13:57:20 crc kubenswrapper[4812]: I0216 13:57:20.597283 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-utilities" (OuterVolumeSpecName: "utilities") pod "7ce2ca0e-7e12-44aa-a876-3e47d60aed95" (UID: "7ce2ca0e-7e12-44aa-a876-3e47d60aed95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:57:20 crc kubenswrapper[4812]: I0216 13:57:20.604820 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-kube-api-access-5k59g" (OuterVolumeSpecName: "kube-api-access-5k59g") pod "7ce2ca0e-7e12-44aa-a876-3e47d60aed95" (UID: "7ce2ca0e-7e12-44aa-a876-3e47d60aed95"). InnerVolumeSpecName "kube-api-access-5k59g".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:57:20 crc kubenswrapper[4812]: I0216 13:57:20.699223 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k59g\" (UniqueName: \"kubernetes.io/projected/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-kube-api-access-5k59g\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:20 crc kubenswrapper[4812]: I0216 13:57:20.699630 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:20 crc kubenswrapper[4812]: I0216 13:57:20.723798 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ce2ca0e-7e12-44aa-a876-3e47d60aed95" (UID: "7ce2ca0e-7e12-44aa-a876-3e47d60aed95"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:57:20 crc kubenswrapper[4812]: I0216 13:57:20.801782 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce2ca0e-7e12-44aa-a876-3e47d60aed95-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:20 crc kubenswrapper[4812]: I0216 13:57:20.959880 4812 generic.go:334] "Generic (PLEG): container finished" podID="7ce2ca0e-7e12-44aa-a876-3e47d60aed95" containerID="338a0e844c3f79e8bf59ef633cc5b2d21e265f6fd2c6096b9b8a28980b4ad25f" exitCode=0 Feb 16 13:57:20 crc kubenswrapper[4812]: I0216 13:57:20.960060 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7kbl8" Feb 16 13:57:20 crc kubenswrapper[4812]: I0216 13:57:20.960091 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kbl8" event={"ID":"7ce2ca0e-7e12-44aa-a876-3e47d60aed95","Type":"ContainerDied","Data":"338a0e844c3f79e8bf59ef633cc5b2d21e265f6fd2c6096b9b8a28980b4ad25f"} Feb 16 13:57:20 crc kubenswrapper[4812]: I0216 13:57:20.960471 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kbl8" event={"ID":"7ce2ca0e-7e12-44aa-a876-3e47d60aed95","Type":"ContainerDied","Data":"fdaaa1ad80563b4d419a0414f58027d6df28caf29d5dc06262219bad374cb068"} Feb 16 13:57:20 crc kubenswrapper[4812]: I0216 13:57:20.960515 4812 scope.go:117] "RemoveContainer" containerID="338a0e844c3f79e8bf59ef633cc5b2d21e265f6fd2c6096b9b8a28980b4ad25f" Feb 16 13:57:20 crc kubenswrapper[4812]: I0216 13:57:20.999646 4812 scope.go:117] "RemoveContainer" containerID="e2c0b4d8f087b680a0337a6434e286d72a0cc71e0cb98f547c602345e96406df" Feb 16 13:57:21 crc kubenswrapper[4812]: I0216 13:57:21.058699 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7kbl8"] Feb 16 13:57:21 crc kubenswrapper[4812]: I0216 13:57:21.080832 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7kbl8"] Feb 16 13:57:21 crc kubenswrapper[4812]: I0216 13:57:21.092837 4812 scope.go:117] "RemoveContainer" containerID="c9b6f3567174cb36395d42ad4362525bd693f9f8b91c4047dc6bca0f8e7a0674" Feb 16 13:57:21 crc kubenswrapper[4812]: I0216 13:57:21.121794 4812 scope.go:117] "RemoveContainer" containerID="338a0e844c3f79e8bf59ef633cc5b2d21e265f6fd2c6096b9b8a28980b4ad25f" Feb 16 13:57:21 crc kubenswrapper[4812]: E0216 13:57:21.122500 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"338a0e844c3f79e8bf59ef633cc5b2d21e265f6fd2c6096b9b8a28980b4ad25f\": container with ID starting with 338a0e844c3f79e8bf59ef633cc5b2d21e265f6fd2c6096b9b8a28980b4ad25f not found: ID does not exist" containerID="338a0e844c3f79e8bf59ef633cc5b2d21e265f6fd2c6096b9b8a28980b4ad25f" Feb 16 13:57:21 crc kubenswrapper[4812]: I0216 13:57:21.122569 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"338a0e844c3f79e8bf59ef633cc5b2d21e265f6fd2c6096b9b8a28980b4ad25f"} err="failed to get container status \"338a0e844c3f79e8bf59ef633cc5b2d21e265f6fd2c6096b9b8a28980b4ad25f\": rpc error: code = NotFound desc = could not find container \"338a0e844c3f79e8bf59ef633cc5b2d21e265f6fd2c6096b9b8a28980b4ad25f\": container with ID starting with 338a0e844c3f79e8bf59ef633cc5b2d21e265f6fd2c6096b9b8a28980b4ad25f not found: ID does not exist" Feb 16 13:57:21 crc kubenswrapper[4812]: I0216 13:57:21.122608 4812 scope.go:117] "RemoveContainer" containerID="e2c0b4d8f087b680a0337a6434e286d72a0cc71e0cb98f547c602345e96406df" Feb 16 13:57:21 crc kubenswrapper[4812]: E0216 13:57:21.123073 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2c0b4d8f087b680a0337a6434e286d72a0cc71e0cb98f547c602345e96406df\": container with ID starting with e2c0b4d8f087b680a0337a6434e286d72a0cc71e0cb98f547c602345e96406df not found: ID does not exist" containerID="e2c0b4d8f087b680a0337a6434e286d72a0cc71e0cb98f547c602345e96406df" Feb 16 13:57:21 crc kubenswrapper[4812]: I0216 13:57:21.123109 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c0b4d8f087b680a0337a6434e286d72a0cc71e0cb98f547c602345e96406df"} err="failed to get container status \"e2c0b4d8f087b680a0337a6434e286d72a0cc71e0cb98f547c602345e96406df\": rpc error: code = NotFound desc = could not find container \"e2c0b4d8f087b680a0337a6434e286d72a0cc71e0cb98f547c602345e96406df\": container with ID 
starting with e2c0b4d8f087b680a0337a6434e286d72a0cc71e0cb98f547c602345e96406df not found: ID does not exist" Feb 16 13:57:21 crc kubenswrapper[4812]: I0216 13:57:21.123136 4812 scope.go:117] "RemoveContainer" containerID="c9b6f3567174cb36395d42ad4362525bd693f9f8b91c4047dc6bca0f8e7a0674" Feb 16 13:57:21 crc kubenswrapper[4812]: E0216 13:57:21.123422 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9b6f3567174cb36395d42ad4362525bd693f9f8b91c4047dc6bca0f8e7a0674\": container with ID starting with c9b6f3567174cb36395d42ad4362525bd693f9f8b91c4047dc6bca0f8e7a0674 not found: ID does not exist" containerID="c9b6f3567174cb36395d42ad4362525bd693f9f8b91c4047dc6bca0f8e7a0674" Feb 16 13:57:21 crc kubenswrapper[4812]: I0216 13:57:21.123488 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9b6f3567174cb36395d42ad4362525bd693f9f8b91c4047dc6bca0f8e7a0674"} err="failed to get container status \"c9b6f3567174cb36395d42ad4362525bd693f9f8b91c4047dc6bca0f8e7a0674\": rpc error: code = NotFound desc = could not find container \"c9b6f3567174cb36395d42ad4362525bd693f9f8b91c4047dc6bca0f8e7a0674\": container with ID starting with c9b6f3567174cb36395d42ad4362525bd693f9f8b91c4047dc6bca0f8e7a0674 not found: ID does not exist" Feb 16 13:57:21 crc kubenswrapper[4812]: I0216 13:57:21.893519 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce2ca0e-7e12-44aa-a876-3e47d60aed95" path="/var/lib/kubelet/pods/7ce2ca0e-7e12-44aa-a876-3e47d60aed95/volumes" Feb 16 13:57:22 crc kubenswrapper[4812]: I0216 13:57:22.258531 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b5mlw"] Feb 16 13:57:22 crc kubenswrapper[4812]: E0216 13:57:22.259238 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce2ca0e-7e12-44aa-a876-3e47d60aed95" containerName="extract-utilities" Feb 16 13:57:22 crc 
kubenswrapper[4812]: I0216 13:57:22.259261 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce2ca0e-7e12-44aa-a876-3e47d60aed95" containerName="extract-utilities" Feb 16 13:57:22 crc kubenswrapper[4812]: E0216 13:57:22.259289 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce2ca0e-7e12-44aa-a876-3e47d60aed95" containerName="extract-content" Feb 16 13:57:22 crc kubenswrapper[4812]: I0216 13:57:22.259298 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce2ca0e-7e12-44aa-a876-3e47d60aed95" containerName="extract-content" Feb 16 13:57:22 crc kubenswrapper[4812]: E0216 13:57:22.259320 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce2ca0e-7e12-44aa-a876-3e47d60aed95" containerName="registry-server" Feb 16 13:57:22 crc kubenswrapper[4812]: I0216 13:57:22.259328 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce2ca0e-7e12-44aa-a876-3e47d60aed95" containerName="registry-server" Feb 16 13:57:22 crc kubenswrapper[4812]: I0216 13:57:22.259651 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce2ca0e-7e12-44aa-a876-3e47d60aed95" containerName="registry-server" Feb 16 13:57:22 crc kubenswrapper[4812]: I0216 13:57:22.263020 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b5mlw" Feb 16 13:57:22 crc kubenswrapper[4812]: I0216 13:57:22.277340 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b5mlw"] Feb 16 13:57:22 crc kubenswrapper[4812]: I0216 13:57:22.385875 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb445434-4b77-4079-a14a-c21480d7bb4e-utilities\") pod \"community-operators-b5mlw\" (UID: \"fb445434-4b77-4079-a14a-c21480d7bb4e\") " pod="openshift-marketplace/community-operators-b5mlw" Feb 16 13:57:22 crc kubenswrapper[4812]: I0216 13:57:22.386462 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb445434-4b77-4079-a14a-c21480d7bb4e-catalog-content\") pod \"community-operators-b5mlw\" (UID: \"fb445434-4b77-4079-a14a-c21480d7bb4e\") " pod="openshift-marketplace/community-operators-b5mlw" Feb 16 13:57:22 crc kubenswrapper[4812]: I0216 13:57:22.386704 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srtxj\" (UniqueName: \"kubernetes.io/projected/fb445434-4b77-4079-a14a-c21480d7bb4e-kube-api-access-srtxj\") pod \"community-operators-b5mlw\" (UID: \"fb445434-4b77-4079-a14a-c21480d7bb4e\") " pod="openshift-marketplace/community-operators-b5mlw" Feb 16 13:57:22 crc kubenswrapper[4812]: I0216 13:57:22.488426 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb445434-4b77-4079-a14a-c21480d7bb4e-utilities\") pod \"community-operators-b5mlw\" (UID: \"fb445434-4b77-4079-a14a-c21480d7bb4e\") " pod="openshift-marketplace/community-operators-b5mlw" Feb 16 13:57:22 crc kubenswrapper[4812]: I0216 13:57:22.488561 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb445434-4b77-4079-a14a-c21480d7bb4e-catalog-content\") pod \"community-operators-b5mlw\" (UID: \"fb445434-4b77-4079-a14a-c21480d7bb4e\") " pod="openshift-marketplace/community-operators-b5mlw" Feb 16 13:57:22 crc kubenswrapper[4812]: I0216 13:57:22.488615 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srtxj\" (UniqueName: \"kubernetes.io/projected/fb445434-4b77-4079-a14a-c21480d7bb4e-kube-api-access-srtxj\") pod \"community-operators-b5mlw\" (UID: \"fb445434-4b77-4079-a14a-c21480d7bb4e\") " pod="openshift-marketplace/community-operators-b5mlw" Feb 16 13:57:22 crc kubenswrapper[4812]: I0216 13:57:22.489438 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb445434-4b77-4079-a14a-c21480d7bb4e-utilities\") pod \"community-operators-b5mlw\" (UID: \"fb445434-4b77-4079-a14a-c21480d7bb4e\") " pod="openshift-marketplace/community-operators-b5mlw" Feb 16 13:57:22 crc kubenswrapper[4812]: I0216 13:57:22.489702 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb445434-4b77-4079-a14a-c21480d7bb4e-catalog-content\") pod \"community-operators-b5mlw\" (UID: \"fb445434-4b77-4079-a14a-c21480d7bb4e\") " pod="openshift-marketplace/community-operators-b5mlw" Feb 16 13:57:22 crc kubenswrapper[4812]: I0216 13:57:22.510539 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srtxj\" (UniqueName: \"kubernetes.io/projected/fb445434-4b77-4079-a14a-c21480d7bb4e-kube-api-access-srtxj\") pod \"community-operators-b5mlw\" (UID: \"fb445434-4b77-4079-a14a-c21480d7bb4e\") " pod="openshift-marketplace/community-operators-b5mlw" Feb 16 13:57:22 crc kubenswrapper[4812]: I0216 13:57:22.596537 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b5mlw" Feb 16 13:57:23 crc kubenswrapper[4812]: W0216 13:57:23.172709 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb445434_4b77_4079_a14a_c21480d7bb4e.slice/crio-0c3ffa057e9d71c08c11ee71cc2969f1a9d97459b0461f4892bacb864367254a WatchSource:0}: Error finding container 0c3ffa057e9d71c08c11ee71cc2969f1a9d97459b0461f4892bacb864367254a: Status 404 returned error can't find the container with id 0c3ffa057e9d71c08c11ee71cc2969f1a9d97459b0461f4892bacb864367254a Feb 16 13:57:23 crc kubenswrapper[4812]: I0216 13:57:23.177116 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b5mlw"] Feb 16 13:57:23 crc kubenswrapper[4812]: E0216 13:57:23.882750 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:57:24 crc kubenswrapper[4812]: I0216 13:57:24.003435 4812 generic.go:334] "Generic (PLEG): container finished" podID="fb445434-4b77-4079-a14a-c21480d7bb4e" containerID="3889d92f57bfbe69edfe67b47157fb3e7f77bcb29d89c6d13aef27cb485c8dd3" exitCode=0 Feb 16 13:57:24 crc kubenswrapper[4812]: I0216 13:57:24.003500 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5mlw" event={"ID":"fb445434-4b77-4079-a14a-c21480d7bb4e","Type":"ContainerDied","Data":"3889d92f57bfbe69edfe67b47157fb3e7f77bcb29d89c6d13aef27cb485c8dd3"} Feb 16 13:57:24 crc kubenswrapper[4812]: I0216 13:57:24.003568 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5mlw" 
event={"ID":"fb445434-4b77-4079-a14a-c21480d7bb4e","Type":"ContainerStarted","Data":"0c3ffa057e9d71c08c11ee71cc2969f1a9d97459b0461f4892bacb864367254a"} Feb 16 13:57:24 crc kubenswrapper[4812]: I0216 13:57:24.403714 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.018216 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5mlw" event={"ID":"fb445434-4b77-4079-a14a-c21480d7bb4e","Type":"ContainerStarted","Data":"6e1e27fae621f2e94362e36ed877e3fe118c756e522d07bb23330406288055f5"} Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.052499 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-46bzm"] Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.054395 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-46bzm" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.057252 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.064891 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.073515 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-46bzm"] Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.172214 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5v7b\" (UniqueName: \"kubernetes.io/projected/9f356bcf-8719-4c4d-a9f8-b21489380dd8-kube-api-access-s5v7b\") pod \"nova-cell0-cell-mapping-46bzm\" (UID: \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\") " pod="openstack/nova-cell0-cell-mapping-46bzm" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.172313 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-scripts\") pod \"nova-cell0-cell-mapping-46bzm\" (UID: \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\") " pod="openstack/nova-cell0-cell-mapping-46bzm" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.172348 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-config-data\") pod \"nova-cell0-cell-mapping-46bzm\" (UID: \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\") " pod="openstack/nova-cell0-cell-mapping-46bzm" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.172381 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-46bzm\" (UID: \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\") " pod="openstack/nova-cell0-cell-mapping-46bzm" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.254419 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.264359 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.267706 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.276163 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5v7b\" (UniqueName: \"kubernetes.io/projected/9f356bcf-8719-4c4d-a9f8-b21489380dd8-kube-api-access-s5v7b\") pod \"nova-cell0-cell-mapping-46bzm\" (UID: \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\") " pod="openstack/nova-cell0-cell-mapping-46bzm" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.276259 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-scripts\") pod \"nova-cell0-cell-mapping-46bzm\" (UID: \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\") " pod="openstack/nova-cell0-cell-mapping-46bzm" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.276295 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-config-data\") pod \"nova-cell0-cell-mapping-46bzm\" (UID: \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\") " pod="openstack/nova-cell0-cell-mapping-46bzm" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.276335 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-46bzm\" (UID: \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\") " pod="openstack/nova-cell0-cell-mapping-46bzm" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.286147 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 
13:57:25.292850 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-config-data\") pod \"nova-cell0-cell-mapping-46bzm\" (UID: \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\") " pod="openstack/nova-cell0-cell-mapping-46bzm" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.301508 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-46bzm\" (UID: \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\") " pod="openstack/nova-cell0-cell-mapping-46bzm" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.303253 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-scripts\") pod \"nova-cell0-cell-mapping-46bzm\" (UID: \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\") " pod="openstack/nova-cell0-cell-mapping-46bzm" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.310043 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5v7b\" (UniqueName: \"kubernetes.io/projected/9f356bcf-8719-4c4d-a9f8-b21489380dd8-kube-api-access-s5v7b\") pod \"nova-cell0-cell-mapping-46bzm\" (UID: \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\") " pod="openstack/nova-cell0-cell-mapping-46bzm" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.381856 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qslb2\" (UniqueName: \"kubernetes.io/projected/ebb386ce-7ac7-465f-952e-ba006a49411d-kube-api-access-qslb2\") pod \"nova-scheduler-0\" (UID: \"ebb386ce-7ac7-465f-952e-ba006a49411d\") " pod="openstack/nova-scheduler-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.381948 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb386ce-7ac7-465f-952e-ba006a49411d-config-data\") pod \"nova-scheduler-0\" (UID: \"ebb386ce-7ac7-465f-952e-ba006a49411d\") " pod="openstack/nova-scheduler-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.381986 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb386ce-7ac7-465f-952e-ba006a49411d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ebb386ce-7ac7-465f-952e-ba006a49411d\") " pod="openstack/nova-scheduler-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.386781 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-46bzm" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.397655 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.407381 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.412323 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.418914 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.486281 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qslb2\" (UniqueName: \"kubernetes.io/projected/ebb386ce-7ac7-465f-952e-ba006a49411d-kube-api-access-qslb2\") pod \"nova-scheduler-0\" (UID: \"ebb386ce-7ac7-465f-952e-ba006a49411d\") " pod="openstack/nova-scheduler-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.486603 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb386ce-7ac7-465f-952e-ba006a49411d-config-data\") pod \"nova-scheduler-0\" (UID: \"ebb386ce-7ac7-465f-952e-ba006a49411d\") " pod="openstack/nova-scheduler-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.486741 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb386ce-7ac7-465f-952e-ba006a49411d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ebb386ce-7ac7-465f-952e-ba006a49411d\") " pod="openstack/nova-scheduler-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.492323 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb386ce-7ac7-465f-952e-ba006a49411d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ebb386ce-7ac7-465f-952e-ba006a49411d\") " pod="openstack/nova-scheduler-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.498325 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ebb386ce-7ac7-465f-952e-ba006a49411d-config-data\") pod \"nova-scheduler-0\" (UID: \"ebb386ce-7ac7-465f-952e-ba006a49411d\") " pod="openstack/nova-scheduler-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.548408 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qslb2\" (UniqueName: \"kubernetes.io/projected/ebb386ce-7ac7-465f-952e-ba006a49411d-kube-api-access-qslb2\") pod \"nova-scheduler-0\" (UID: \"ebb386ce-7ac7-465f-952e-ba006a49411d\") " pod="openstack/nova-scheduler-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.581527 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.584558 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.591039 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.591825 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2869bf1-5702-4053-b414-f1fa8ba4f481-logs\") pod \"nova-api-0\" (UID: \"e2869bf1-5702-4053-b414-f1fa8ba4f481\") " pod="openstack/nova-api-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.591883 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2869bf1-5702-4053-b414-f1fa8ba4f481-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e2869bf1-5702-4053-b414-f1fa8ba4f481\") " pod="openstack/nova-api-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.591963 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78j7d\" (UniqueName: 
\"kubernetes.io/projected/e2869bf1-5702-4053-b414-f1fa8ba4f481-kube-api-access-78j7d\") pod \"nova-api-0\" (UID: \"e2869bf1-5702-4053-b414-f1fa8ba4f481\") " pod="openstack/nova-api-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.592080 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2869bf1-5702-4053-b414-f1fa8ba4f481-config-data\") pod \"nova-api-0\" (UID: \"e2869bf1-5702-4053-b414-f1fa8ba4f481\") " pod="openstack/nova-api-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.592407 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.620007 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.693747 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2869bf1-5702-4053-b414-f1fa8ba4f481-logs\") pod \"nova-api-0\" (UID: \"e2869bf1-5702-4053-b414-f1fa8ba4f481\") " pod="openstack/nova-api-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.693811 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2869bf1-5702-4053-b414-f1fa8ba4f481-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e2869bf1-5702-4053-b414-f1fa8ba4f481\") " pod="openstack/nova-api-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.693875 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78j7d\" (UniqueName: \"kubernetes.io/projected/e2869bf1-5702-4053-b414-f1fa8ba4f481-kube-api-access-78j7d\") pod \"nova-api-0\" (UID: \"e2869bf1-5702-4053-b414-f1fa8ba4f481\") " pod="openstack/nova-api-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.693943 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/613c7846-5718-4cec-86b1-fc129519d5d1-config-data\") pod \"nova-metadata-0\" (UID: \"613c7846-5718-4cec-86b1-fc129519d5d1\") " pod="openstack/nova-metadata-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.693975 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/613c7846-5718-4cec-86b1-fc129519d5d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"613c7846-5718-4cec-86b1-fc129519d5d1\") " pod="openstack/nova-metadata-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.694011 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldtlt\" (UniqueName: \"kubernetes.io/projected/613c7846-5718-4cec-86b1-fc129519d5d1-kube-api-access-ldtlt\") pod \"nova-metadata-0\" (UID: \"613c7846-5718-4cec-86b1-fc129519d5d1\") " pod="openstack/nova-metadata-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.694048 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/613c7846-5718-4cec-86b1-fc129519d5d1-logs\") pod \"nova-metadata-0\" (UID: \"613c7846-5718-4cec-86b1-fc129519d5d1\") " pod="openstack/nova-metadata-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.694074 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2869bf1-5702-4053-b414-f1fa8ba4f481-config-data\") pod \"nova-api-0\" (UID: \"e2869bf1-5702-4053-b414-f1fa8ba4f481\") " pod="openstack/nova-api-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.694334 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e2869bf1-5702-4053-b414-f1fa8ba4f481-logs\") pod \"nova-api-0\" (UID: \"e2869bf1-5702-4053-b414-f1fa8ba4f481\") " pod="openstack/nova-api-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.724887 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2869bf1-5702-4053-b414-f1fa8ba4f481-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e2869bf1-5702-4053-b414-f1fa8ba4f481\") " pod="openstack/nova-api-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.728889 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78j7d\" (UniqueName: \"kubernetes.io/projected/e2869bf1-5702-4053-b414-f1fa8ba4f481-kube-api-access-78j7d\") pod \"nova-api-0\" (UID: \"e2869bf1-5702-4053-b414-f1fa8ba4f481\") " pod="openstack/nova-api-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.741296 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2869bf1-5702-4053-b414-f1fa8ba4f481-config-data\") pod \"nova-api-0\" (UID: \"e2869bf1-5702-4053-b414-f1fa8ba4f481\") " pod="openstack/nova-api-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.795892 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/613c7846-5718-4cec-86b1-fc129519d5d1-config-data\") pod \"nova-metadata-0\" (UID: \"613c7846-5718-4cec-86b1-fc129519d5d1\") " pod="openstack/nova-metadata-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.795956 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/613c7846-5718-4cec-86b1-fc129519d5d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"613c7846-5718-4cec-86b1-fc129519d5d1\") " pod="openstack/nova-metadata-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.796022 
4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldtlt\" (UniqueName: \"kubernetes.io/projected/613c7846-5718-4cec-86b1-fc129519d5d1-kube-api-access-ldtlt\") pod \"nova-metadata-0\" (UID: \"613c7846-5718-4cec-86b1-fc129519d5d1\") " pod="openstack/nova-metadata-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.796063 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/613c7846-5718-4cec-86b1-fc129519d5d1-logs\") pod \"nova-metadata-0\" (UID: \"613c7846-5718-4cec-86b1-fc129519d5d1\") " pod="openstack/nova-metadata-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.796693 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/613c7846-5718-4cec-86b1-fc129519d5d1-logs\") pod \"nova-metadata-0\" (UID: \"613c7846-5718-4cec-86b1-fc129519d5d1\") " pod="openstack/nova-metadata-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.802176 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/613c7846-5718-4cec-86b1-fc129519d5d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"613c7846-5718-4cec-86b1-fc129519d5d1\") " pod="openstack/nova-metadata-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.805422 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/613c7846-5718-4cec-86b1-fc129519d5d1-config-data\") pod \"nova-metadata-0\" (UID: \"613c7846-5718-4cec-86b1-fc129519d5d1\") " pod="openstack/nova-metadata-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.832322 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.834379 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.848562 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.870545 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldtlt\" (UniqueName: \"kubernetes.io/projected/613c7846-5718-4cec-86b1-fc129519d5d1-kube-api-access-ldtlt\") pod \"nova-metadata-0\" (UID: \"613c7846-5718-4cec-86b1-fc129519d5d1\") " pod="openstack/nova-metadata-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.941290 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.967048 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.970081 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:57:25 crc kubenswrapper[4812]: I0216 13:57:25.985509 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-cqh8x"] Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.024985 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cswl8\" (UniqueName: \"kubernetes.io/projected/7759cc43-520e-4eb8-8911-fb01c660247c-kube-api-access-cswl8\") pod \"nova-cell1-novncproxy-0\" (UID: \"7759cc43-520e-4eb8-8911-fb01c660247c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.025117 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7759cc43-520e-4eb8-8911-fb01c660247c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"7759cc43-520e-4eb8-8911-fb01c660247c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.025281 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7759cc43-520e-4eb8-8911-fb01c660247c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7759cc43-520e-4eb8-8911-fb01c660247c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.055956 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.117318 4812 generic.go:334] "Generic (PLEG): container finished" podID="fb445434-4b77-4079-a14a-c21480d7bb4e" containerID="6e1e27fae621f2e94362e36ed877e3fe118c756e522d07bb23330406288055f5" exitCode=0 Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.117469 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5mlw" event={"ID":"fb445434-4b77-4079-a14a-c21480d7bb4e","Type":"ContainerDied","Data":"6e1e27fae621f2e94362e36ed877e3fe118c756e522d07bb23330406288055f5"} Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.135981 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7759cc43-520e-4eb8-8911-fb01c660247c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7759cc43-520e-4eb8-8911-fb01c660247c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.136467 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cswl8\" (UniqueName: \"kubernetes.io/projected/7759cc43-520e-4eb8-8911-fb01c660247c-kube-api-access-cswl8\") pod \"nova-cell1-novncproxy-0\" (UID: \"7759cc43-520e-4eb8-8911-fb01c660247c\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.136914 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7759cc43-520e-4eb8-8911-fb01c660247c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7759cc43-520e-4eb8-8911-fb01c660247c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.148994 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7759cc43-520e-4eb8-8911-fb01c660247c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7759cc43-520e-4eb8-8911-fb01c660247c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.198403 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cswl8\" (UniqueName: \"kubernetes.io/projected/7759cc43-520e-4eb8-8911-fb01c660247c-kube-api-access-cswl8\") pod \"nova-cell1-novncproxy-0\" (UID: \"7759cc43-520e-4eb8-8911-fb01c660247c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.212394 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-cqh8x"] Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.214147 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7759cc43-520e-4eb8-8911-fb01c660247c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7759cc43-520e-4eb8-8911-fb01c660247c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.245172 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kt7q\" (UniqueName: \"kubernetes.io/projected/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-kube-api-access-7kt7q\") pod 
\"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.246497 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-config\") pod \"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.246682 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.247273 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-dns-svc\") pod \"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.247374 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.247507 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.350073 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.350139 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-dns-svc\") pod \"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.350189 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.350239 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.350472 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kt7q\" (UniqueName: 
\"kubernetes.io/projected/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-kube-api-access-7kt7q\") pod \"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.350568 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-config\") pod \"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.352004 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.352048 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-dns-svc\") pod \"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.352335 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-config\") pod \"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.352870 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-dns-swift-storage-0\") pod 
\"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.353010 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.387254 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kt7q\" (UniqueName: \"kubernetes.io/projected/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-kube-api-access-7kt7q\") pod \"dnsmasq-dns-757b4f8459-cqh8x\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.471666 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.541357 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.610477 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-46bzm"] Feb 16 13:57:26 crc kubenswrapper[4812]: W0216 13:57:26.638761 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f356bcf_8719_4c4d_a9f8_b21489380dd8.slice/crio-6862892d567dc0569a1d1c395a8c7c00d3df9aae59224a7b834caf800a283ca6 WatchSource:0}: Error finding container 6862892d567dc0569a1d1c395a8c7c00d3df9aae59224a7b834caf800a283ca6: Status 404 returned error can't find the container with id 6862892d567dc0569a1d1c395a8c7c00d3df9aae59224a7b834caf800a283ca6 Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.803581 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:57:26 crc kubenswrapper[4812]: W0216 13:57:26.816412 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebb386ce_7ac7_465f_952e_ba006a49411d.slice/crio-241ed3b6131904c64ab08dcceabf812083d5eb182aba1bb53a9e1bf74276494b WatchSource:0}: Error finding container 241ed3b6131904c64ab08dcceabf812083d5eb182aba1bb53a9e1bf74276494b: Status 404 returned error can't find the container with id 241ed3b6131904c64ab08dcceabf812083d5eb182aba1bb53a9e1bf74276494b Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.863763 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4snbn"] Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.865793 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4snbn" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.869378 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.869775 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.885682 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4snbn\" (UID: \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\") " pod="openstack/nova-cell1-conductor-db-sync-4snbn" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.885744 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-scripts\") pod \"nova-cell1-conductor-db-sync-4snbn\" (UID: \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\") " pod="openstack/nova-cell1-conductor-db-sync-4snbn" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.885785 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-config-data\") pod \"nova-cell1-conductor-db-sync-4snbn\" (UID: \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\") " pod="openstack/nova-cell1-conductor-db-sync-4snbn" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.886091 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h59nv\" (UniqueName: \"kubernetes.io/projected/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-kube-api-access-h59nv\") pod \"nova-cell1-conductor-db-sync-4snbn\" 
(UID: \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\") " pod="openstack/nova-cell1-conductor-db-sync-4snbn" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.900721 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4snbn"] Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.988379 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4snbn\" (UID: \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\") " pod="openstack/nova-cell1-conductor-db-sync-4snbn" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.988514 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-scripts\") pod \"nova-cell1-conductor-db-sync-4snbn\" (UID: \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\") " pod="openstack/nova-cell1-conductor-db-sync-4snbn" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.988588 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-config-data\") pod \"nova-cell1-conductor-db-sync-4snbn\" (UID: \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\") " pod="openstack/nova-cell1-conductor-db-sync-4snbn" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.993043 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h59nv\" (UniqueName: \"kubernetes.io/projected/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-kube-api-access-h59nv\") pod \"nova-cell1-conductor-db-sync-4snbn\" (UID: \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\") " pod="openstack/nova-cell1-conductor-db-sync-4snbn" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.996116 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-scripts\") pod \"nova-cell1-conductor-db-sync-4snbn\" (UID: \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\") " pod="openstack/nova-cell1-conductor-db-sync-4snbn" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.997968 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-config-data\") pod \"nova-cell1-conductor-db-sync-4snbn\" (UID: \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\") " pod="openstack/nova-cell1-conductor-db-sync-4snbn" Feb 16 13:57:26 crc kubenswrapper[4812]: I0216 13:57:26.998173 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4snbn\" (UID: \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\") " pod="openstack/nova-cell1-conductor-db-sync-4snbn" Feb 16 13:57:27 crc kubenswrapper[4812]: I0216 13:57:27.009383 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:57:27 crc kubenswrapper[4812]: I0216 13:57:27.014785 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h59nv\" (UniqueName: \"kubernetes.io/projected/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-kube-api-access-h59nv\") pod \"nova-cell1-conductor-db-sync-4snbn\" (UID: \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\") " pod="openstack/nova-cell1-conductor-db-sync-4snbn" Feb 16 13:57:27 crc kubenswrapper[4812]: I0216 13:57:27.164159 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-46bzm" event={"ID":"9f356bcf-8719-4c4d-a9f8-b21489380dd8","Type":"ContainerStarted","Data":"6862892d567dc0569a1d1c395a8c7c00d3df9aae59224a7b834caf800a283ca6"} Feb 16 13:57:27 crc kubenswrapper[4812]: I0216 13:57:27.166606 4812 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2869bf1-5702-4053-b414-f1fa8ba4f481","Type":"ContainerStarted","Data":"b450a82a7848b296845287788b9ae8a6444d49b6015b86367e6edac25f3e0428"} Feb 16 13:57:27 crc kubenswrapper[4812]: I0216 13:57:27.178038 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ebb386ce-7ac7-465f-952e-ba006a49411d","Type":"ContainerStarted","Data":"241ed3b6131904c64ab08dcceabf812083d5eb182aba1bb53a9e1bf74276494b"} Feb 16 13:57:27 crc kubenswrapper[4812]: W0216 13:57:27.201555 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod613c7846_5718_4cec_86b1_fc129519d5d1.slice/crio-75adcd17f4096ddc8120508ce98f1de90b3b7026d2d8d75e293f210a5d1581a3 WatchSource:0}: Error finding container 75adcd17f4096ddc8120508ce98f1de90b3b7026d2d8d75e293f210a5d1581a3: Status 404 returned error can't find the container with id 75adcd17f4096ddc8120508ce98f1de90b3b7026d2d8d75e293f210a5d1581a3 Feb 16 13:57:27 crc kubenswrapper[4812]: W0216 13:57:27.214358 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7759cc43_520e_4eb8_8911_fb01c660247c.slice/crio-f4780dcbe4289ee1314ee07975f745079317db27842c444acdd13dfd05cb9ce9 WatchSource:0}: Error finding container f4780dcbe4289ee1314ee07975f745079317db27842c444acdd13dfd05cb9ce9: Status 404 returned error can't find the container with id f4780dcbe4289ee1314ee07975f745079317db27842c444acdd13dfd05cb9ce9 Feb 16 13:57:27 crc kubenswrapper[4812]: I0216 13:57:27.216268 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:57:27 crc kubenswrapper[4812]: I0216 13:57:27.238440 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:57:27 crc kubenswrapper[4812]: I0216 13:57:27.248347 4812 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4snbn" Feb 16 13:57:27 crc kubenswrapper[4812]: I0216 13:57:27.458672 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-cqh8x"] Feb 16 13:57:27 crc kubenswrapper[4812]: W0216 13:57:27.482513 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3da896f_2c71_43dc_afdf_6cfc4c1b01ba.slice/crio-3e5d26d5035ab43a38b3338350df42d468e6d7b790917f50990553b8b6780092 WatchSource:0}: Error finding container 3e5d26d5035ab43a38b3338350df42d468e6d7b790917f50990553b8b6780092: Status 404 returned error can't find the container with id 3e5d26d5035ab43a38b3338350df42d468e6d7b790917f50990553b8b6780092 Feb 16 13:57:27 crc kubenswrapper[4812]: I0216 13:57:27.774863 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4snbn"] Feb 16 13:57:27 crc kubenswrapper[4812]: W0216 13:57:27.810175 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fd87c54_c4b6_4aaf_9c67_31b1bf2e43bd.slice/crio-e9c4644b80119f609c18a297d2636a42bf16348c731903425788c942ef825a49 WatchSource:0}: Error finding container e9c4644b80119f609c18a297d2636a42bf16348c731903425788c942ef825a49: Status 404 returned error can't find the container with id e9c4644b80119f609c18a297d2636a42bf16348c731903425788c942ef825a49 Feb 16 13:57:28 crc kubenswrapper[4812]: I0216 13:57:28.212020 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5mlw" event={"ID":"fb445434-4b77-4079-a14a-c21480d7bb4e","Type":"ContainerStarted","Data":"54ce1aae4f76316ef2b0af968ec6c748427dd807d57db5179e5f6a5bb1b39387"} Feb 16 13:57:28 crc kubenswrapper[4812]: I0216 13:57:28.239914 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4snbn" 
event={"ID":"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd","Type":"ContainerStarted","Data":"66dba176812ac361fc65ee2a48bea40acb3b46a5825e514c2c7c4d21aea33468"} Feb 16 13:57:28 crc kubenswrapper[4812]: I0216 13:57:28.239987 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4snbn" event={"ID":"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd","Type":"ContainerStarted","Data":"e9c4644b80119f609c18a297d2636a42bf16348c731903425788c942ef825a49"} Feb 16 13:57:28 crc kubenswrapper[4812]: I0216 13:57:28.248075 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b5mlw" podStartSLOduration=3.3944125 podStartE2EDuration="6.248040905s" podCreationTimestamp="2026-02-16 13:57:22 +0000 UTC" firstStartedPulling="2026-02-16 13:57:24.006032574 +0000 UTC m=+1533.070363275" lastFinishedPulling="2026-02-16 13:57:26.859660979 +0000 UTC m=+1535.923991680" observedRunningTime="2026-02-16 13:57:28.240243569 +0000 UTC m=+1537.304574290" watchObservedRunningTime="2026-02-16 13:57:28.248040905 +0000 UTC m=+1537.312371606" Feb 16 13:57:28 crc kubenswrapper[4812]: I0216 13:57:28.275290 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-4snbn" podStartSLOduration=2.275259396 podStartE2EDuration="2.275259396s" podCreationTimestamp="2026-02-16 13:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:57:28.261304381 +0000 UTC m=+1537.325635082" watchObservedRunningTime="2026-02-16 13:57:28.275259396 +0000 UTC m=+1537.339590097" Feb 16 13:57:28 crc kubenswrapper[4812]: I0216 13:57:28.289744 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-46bzm" event={"ID":"9f356bcf-8719-4c4d-a9f8-b21489380dd8","Type":"ContainerStarted","Data":"828887c156eb0c0a116591a211bcfb060d33115558755f9c952c246f28a2e6c3"} 
Feb 16 13:57:28 crc kubenswrapper[4812]: I0216 13:57:28.304677 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"613c7846-5718-4cec-86b1-fc129519d5d1","Type":"ContainerStarted","Data":"75adcd17f4096ddc8120508ce98f1de90b3b7026d2d8d75e293f210a5d1581a3"} Feb 16 13:57:28 crc kubenswrapper[4812]: I0216 13:57:28.306824 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7759cc43-520e-4eb8-8911-fb01c660247c","Type":"ContainerStarted","Data":"f4780dcbe4289ee1314ee07975f745079317db27842c444acdd13dfd05cb9ce9"} Feb 16 13:57:28 crc kubenswrapper[4812]: I0216 13:57:28.308912 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" event={"ID":"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba","Type":"ContainerStarted","Data":"3e5d26d5035ab43a38b3338350df42d468e6d7b790917f50990553b8b6780092"} Feb 16 13:57:28 crc kubenswrapper[4812]: I0216 13:57:28.340407 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-46bzm" podStartSLOduration=3.340373309 podStartE2EDuration="3.340373309s" podCreationTimestamp="2026-02-16 13:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:57:28.315531627 +0000 UTC m=+1537.379862328" watchObservedRunningTime="2026-02-16 13:57:28.340373309 +0000 UTC m=+1537.404704000" Feb 16 13:57:29 crc kubenswrapper[4812]: I0216 13:57:29.338322 4812 generic.go:334] "Generic (PLEG): container finished" podID="a3da896f-2c71-43dc-afdf-6cfc4c1b01ba" containerID="16b65f67e955ceea3558aa49cecd4274a4865f762bafce0feb7ca8d1bc33280d" exitCode=0 Feb 16 13:57:29 crc kubenswrapper[4812]: I0216 13:57:29.340042 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" 
event={"ID":"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba","Type":"ContainerDied","Data":"16b65f67e955ceea3558aa49cecd4274a4865f762bafce0feb7ca8d1bc33280d"} Feb 16 13:57:29 crc kubenswrapper[4812]: I0216 13:57:29.340089 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" event={"ID":"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba","Type":"ContainerStarted","Data":"7b85905d2b7bbcd9668614469409330a5a5526964f70094610fc93361a50a223"} Feb 16 13:57:29 crc kubenswrapper[4812]: I0216 13:57:29.341327 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:57:29 crc kubenswrapper[4812]: I0216 13:57:29.379370 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" podStartSLOduration=4.379333008 podStartE2EDuration="4.379333008s" podCreationTimestamp="2026-02-16 13:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:57:29.36977549 +0000 UTC m=+1538.434106191" watchObservedRunningTime="2026-02-16 13:57:29.379333008 +0000 UTC m=+1538.443663709" Feb 16 13:57:29 crc kubenswrapper[4812]: I0216 13:57:29.820168 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:57:29 crc kubenswrapper[4812]: I0216 13:57:29.853133 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:57:32 crc kubenswrapper[4812]: I0216 13:57:32.425183 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2869bf1-5702-4053-b414-f1fa8ba4f481","Type":"ContainerStarted","Data":"1b9b2c87e012b6b09c39bf60b69e825713d9c5ff496357c4f80f67add3eb665e"} Feb 16 13:57:32 crc kubenswrapper[4812]: I0216 13:57:32.428183 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"ebb386ce-7ac7-465f-952e-ba006a49411d","Type":"ContainerStarted","Data":"85d49ad791cd4ab93dbee97194bd2ff6b594c64715dc8ed1b587507e69ff64e0"} Feb 16 13:57:32 crc kubenswrapper[4812]: I0216 13:57:32.440225 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"613c7846-5718-4cec-86b1-fc129519d5d1","Type":"ContainerStarted","Data":"77d3cab484f85d283107583b496e73c37ea064a8bc7ece111193217912931767"} Feb 16 13:57:32 crc kubenswrapper[4812]: I0216 13:57:32.450916 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7759cc43-520e-4eb8-8911-fb01c660247c","Type":"ContainerStarted","Data":"95731e41b11084dd285640b77c03e99b945351016aa9c1fd9bc6094a0e1efae9"} Feb 16 13:57:32 crc kubenswrapper[4812]: I0216 13:57:32.451410 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="7759cc43-520e-4eb8-8911-fb01c660247c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://95731e41b11084dd285640b77c03e99b945351016aa9c1fd9bc6094a0e1efae9" gracePeriod=30 Feb 16 13:57:32 crc kubenswrapper[4812]: I0216 13:57:32.460031 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.79338201 podStartE2EDuration="7.460003074s" podCreationTimestamp="2026-02-16 13:57:25 +0000 UTC" firstStartedPulling="2026-02-16 13:57:26.838769092 +0000 UTC m=+1535.903099803" lastFinishedPulling="2026-02-16 13:57:31.505390166 +0000 UTC m=+1540.569720867" observedRunningTime="2026-02-16 13:57:32.451426264 +0000 UTC m=+1541.515756975" watchObservedRunningTime="2026-02-16 13:57:32.460003074 +0000 UTC m=+1541.524333775" Feb 16 13:57:32 crc kubenswrapper[4812]: I0216 13:57:32.480544 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.1955666799999998 
podStartE2EDuration="7.48051632s" podCreationTimestamp="2026-02-16 13:57:25 +0000 UTC" firstStartedPulling="2026-02-16 13:57:27.220994182 +0000 UTC m=+1536.285324883" lastFinishedPulling="2026-02-16 13:57:31.505943822 +0000 UTC m=+1540.570274523" observedRunningTime="2026-02-16 13:57:32.472898988 +0000 UTC m=+1541.537229689" watchObservedRunningTime="2026-02-16 13:57:32.48051632 +0000 UTC m=+1541.544847021" Feb 16 13:57:32 crc kubenswrapper[4812]: I0216 13:57:32.600933 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b5mlw" Feb 16 13:57:32 crc kubenswrapper[4812]: I0216 13:57:32.601009 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b5mlw" Feb 16 13:57:32 crc kubenswrapper[4812]: I0216 13:57:32.667074 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b5mlw" Feb 16 13:57:33 crc kubenswrapper[4812]: I0216 13:57:33.469866 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2869bf1-5702-4053-b414-f1fa8ba4f481","Type":"ContainerStarted","Data":"d739f57aa156d8a09a1a68eee47ffe00c31d85240d6bbc24680ee30418d8fa37"} Feb 16 13:57:33 crc kubenswrapper[4812]: I0216 13:57:33.473233 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"613c7846-5718-4cec-86b1-fc129519d5d1","Type":"ContainerStarted","Data":"f3a1f83aa38b235ed5616988f65151fd913e82d3e774285adfbbbd537992548b"} Feb 16 13:57:33 crc kubenswrapper[4812]: I0216 13:57:33.475140 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="613c7846-5718-4cec-86b1-fc129519d5d1" containerName="nova-metadata-log" containerID="cri-o://77d3cab484f85d283107583b496e73c37ea064a8bc7ece111193217912931767" gracePeriod=30 Feb 16 13:57:33 crc kubenswrapper[4812]: I0216 13:57:33.475492 4812 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="613c7846-5718-4cec-86b1-fc129519d5d1" containerName="nova-metadata-metadata" containerID="cri-o://f3a1f83aa38b235ed5616988f65151fd913e82d3e774285adfbbbd537992548b" gracePeriod=30 Feb 16 13:57:33 crc kubenswrapper[4812]: I0216 13:57:33.505748 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.985140609 podStartE2EDuration="8.505704268s" podCreationTimestamp="2026-02-16 13:57:25 +0000 UTC" firstStartedPulling="2026-02-16 13:57:26.985402014 +0000 UTC m=+1536.049732715" lastFinishedPulling="2026-02-16 13:57:31.505965673 +0000 UTC m=+1540.570296374" observedRunningTime="2026-02-16 13:57:33.490963449 +0000 UTC m=+1542.555294150" watchObservedRunningTime="2026-02-16 13:57:33.505704268 +0000 UTC m=+1542.570034989" Feb 16 13:57:33 crc kubenswrapper[4812]: I0216 13:57:33.532024 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.224284911 podStartE2EDuration="8.531988722s" podCreationTimestamp="2026-02-16 13:57:25 +0000 UTC" firstStartedPulling="2026-02-16 13:57:27.204523404 +0000 UTC m=+1536.268854105" lastFinishedPulling="2026-02-16 13:57:31.512227215 +0000 UTC m=+1540.576557916" observedRunningTime="2026-02-16 13:57:33.516299436 +0000 UTC m=+1542.580630137" watchObservedRunningTime="2026-02-16 13:57:33.531988722 +0000 UTC m=+1542.596319423" Feb 16 13:57:33 crc kubenswrapper[4812]: I0216 13:57:33.549537 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b5mlw" Feb 16 13:57:33 crc kubenswrapper[4812]: I0216 13:57:33.639944 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b5mlw"] Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.522925 4812 generic.go:334] "Generic (PLEG): container finished" 
podID="613c7846-5718-4cec-86b1-fc129519d5d1" containerID="f3a1f83aa38b235ed5616988f65151fd913e82d3e774285adfbbbd537992548b" exitCode=0 Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.523304 4812 generic.go:334] "Generic (PLEG): container finished" podID="613c7846-5718-4cec-86b1-fc129519d5d1" containerID="77d3cab484f85d283107583b496e73c37ea064a8bc7ece111193217912931767" exitCode=143 Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.523109 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"613c7846-5718-4cec-86b1-fc129519d5d1","Type":"ContainerDied","Data":"f3a1f83aa38b235ed5616988f65151fd913e82d3e774285adfbbbd537992548b"} Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.524180 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"613c7846-5718-4cec-86b1-fc129519d5d1","Type":"ContainerDied","Data":"77d3cab484f85d283107583b496e73c37ea064a8bc7ece111193217912931767"} Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.748054 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.830481 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/613c7846-5718-4cec-86b1-fc129519d5d1-config-data\") pod \"613c7846-5718-4cec-86b1-fc129519d5d1\" (UID: \"613c7846-5718-4cec-86b1-fc129519d5d1\") " Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.830719 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/613c7846-5718-4cec-86b1-fc129519d5d1-logs\") pod \"613c7846-5718-4cec-86b1-fc129519d5d1\" (UID: \"613c7846-5718-4cec-86b1-fc129519d5d1\") " Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.830900 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldtlt\" (UniqueName: \"kubernetes.io/projected/613c7846-5718-4cec-86b1-fc129519d5d1-kube-api-access-ldtlt\") pod \"613c7846-5718-4cec-86b1-fc129519d5d1\" (UID: \"613c7846-5718-4cec-86b1-fc129519d5d1\") " Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.831057 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/613c7846-5718-4cec-86b1-fc129519d5d1-combined-ca-bundle\") pod \"613c7846-5718-4cec-86b1-fc129519d5d1\" (UID: \"613c7846-5718-4cec-86b1-fc129519d5d1\") " Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.831822 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/613c7846-5718-4cec-86b1-fc129519d5d1-logs" (OuterVolumeSpecName: "logs") pod "613c7846-5718-4cec-86b1-fc129519d5d1" (UID: "613c7846-5718-4cec-86b1-fc129519d5d1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.841839 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/613c7846-5718-4cec-86b1-fc129519d5d1-kube-api-access-ldtlt" (OuterVolumeSpecName: "kube-api-access-ldtlt") pod "613c7846-5718-4cec-86b1-fc129519d5d1" (UID: "613c7846-5718-4cec-86b1-fc129519d5d1"). InnerVolumeSpecName "kube-api-access-ldtlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.866033 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/613c7846-5718-4cec-86b1-fc129519d5d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "613c7846-5718-4cec-86b1-fc129519d5d1" (UID: "613c7846-5718-4cec-86b1-fc129519d5d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:57:34 crc kubenswrapper[4812]: E0216 13:57:34.882282 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.894629 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/613c7846-5718-4cec-86b1-fc129519d5d1-config-data" (OuterVolumeSpecName: "config-data") pod "613c7846-5718-4cec-86b1-fc129519d5d1" (UID: "613c7846-5718-4cec-86b1-fc129519d5d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.934268 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldtlt\" (UniqueName: \"kubernetes.io/projected/613c7846-5718-4cec-86b1-fc129519d5d1-kube-api-access-ldtlt\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.934322 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/613c7846-5718-4cec-86b1-fc129519d5d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.934338 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/613c7846-5718-4cec-86b1-fc129519d5d1-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:34 crc kubenswrapper[4812]: I0216 13:57:34.934355 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/613c7846-5718-4cec-86b1-fc129519d5d1-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.538433 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b5mlw" podUID="fb445434-4b77-4079-a14a-c21480d7bb4e" containerName="registry-server" containerID="cri-o://54ce1aae4f76316ef2b0af968ec6c748427dd807d57db5179e5f6a5bb1b39387" gracePeriod=2 Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.539412 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.539444 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"613c7846-5718-4cec-86b1-fc129519d5d1","Type":"ContainerDied","Data":"75adcd17f4096ddc8120508ce98f1de90b3b7026d2d8d75e293f210a5d1581a3"} Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.542082 4812 scope.go:117] "RemoveContainer" containerID="f3a1f83aa38b235ed5616988f65151fd913e82d3e774285adfbbbd537992548b" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.590916 4812 scope.go:117] "RemoveContainer" containerID="77d3cab484f85d283107583b496e73c37ea064a8bc7ece111193217912931767" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.593697 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.597695 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.601696 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.620178 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.642963 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.661572 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:57:35 crc kubenswrapper[4812]: E0216 13:57:35.662144 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="613c7846-5718-4cec-86b1-fc129519d5d1" containerName="nova-metadata-metadata" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.662177 4812 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="613c7846-5718-4cec-86b1-fc129519d5d1" containerName="nova-metadata-metadata" Feb 16 13:57:35 crc kubenswrapper[4812]: E0216 13:57:35.662225 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="613c7846-5718-4cec-86b1-fc129519d5d1" containerName="nova-metadata-log" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.662235 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="613c7846-5718-4cec-86b1-fc129519d5d1" containerName="nova-metadata-log" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.662491 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="613c7846-5718-4cec-86b1-fc129519d5d1" containerName="nova-metadata-metadata" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.662511 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="613c7846-5718-4cec-86b1-fc129519d5d1" containerName="nova-metadata-log" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.663799 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.667133 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.672292 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.675197 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.754787 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/325ea694-3236-4385-bdb2-2796db54e8a5-logs\") pod \"nova-metadata-0\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") " pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.754906 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") " pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.754945 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmfdq\" (UniqueName: \"kubernetes.io/projected/325ea694-3236-4385-bdb2-2796db54e8a5-kube-api-access-nmfdq\") pod \"nova-metadata-0\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") " pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.755045 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") " pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.755075 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-config-data\") pod \"nova-metadata-0\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") " pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.858295 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/325ea694-3236-4385-bdb2-2796db54e8a5-logs\") pod \"nova-metadata-0\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") " pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.858470 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") " pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.858528 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmfdq\" (UniqueName: \"kubernetes.io/projected/325ea694-3236-4385-bdb2-2796db54e8a5-kube-api-access-nmfdq\") pod \"nova-metadata-0\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") " pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.858647 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") " pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 
13:57:35.858692 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-config-data\") pod \"nova-metadata-0\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") " pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.862749 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/325ea694-3236-4385-bdb2-2796db54e8a5-logs\") pod \"nova-metadata-0\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") " pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.870381 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") " pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.887220 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-config-data\") pod \"nova-metadata-0\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") " pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.891773 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") " pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.898461 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmfdq\" (UniqueName: \"kubernetes.io/projected/325ea694-3236-4385-bdb2-2796db54e8a5-kube-api-access-nmfdq\") pod 
\"nova-metadata-0\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") " pod="openstack/nova-metadata-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.906714 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="613c7846-5718-4cec-86b1-fc129519d5d1" path="/var/lib/kubelet/pods/613c7846-5718-4cec-86b1-fc129519d5d1/volumes" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.941653 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:57:35 crc kubenswrapper[4812]: I0216 13:57:35.942129 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.064111 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.278740 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5mlw" Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.396908 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb445434-4b77-4079-a14a-c21480d7bb4e-utilities\") pod \"fb445434-4b77-4079-a14a-c21480d7bb4e\" (UID: \"fb445434-4b77-4079-a14a-c21480d7bb4e\") " Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.397045 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb445434-4b77-4079-a14a-c21480d7bb4e-catalog-content\") pod \"fb445434-4b77-4079-a14a-c21480d7bb4e\" (UID: \"fb445434-4b77-4079-a14a-c21480d7bb4e\") " Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.397512 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srtxj\" (UniqueName: 
\"kubernetes.io/projected/fb445434-4b77-4079-a14a-c21480d7bb4e-kube-api-access-srtxj\") pod \"fb445434-4b77-4079-a14a-c21480d7bb4e\" (UID: \"fb445434-4b77-4079-a14a-c21480d7bb4e\") " Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.398855 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb445434-4b77-4079-a14a-c21480d7bb4e-utilities" (OuterVolumeSpecName: "utilities") pod "fb445434-4b77-4079-a14a-c21480d7bb4e" (UID: "fb445434-4b77-4079-a14a-c21480d7bb4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.417916 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb445434-4b77-4079-a14a-c21480d7bb4e-kube-api-access-srtxj" (OuterVolumeSpecName: "kube-api-access-srtxj") pod "fb445434-4b77-4079-a14a-c21480d7bb4e" (UID: "fb445434-4b77-4079-a14a-c21480d7bb4e"). InnerVolumeSpecName "kube-api-access-srtxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.473341 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.478355 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb445434-4b77-4079-a14a-c21480d7bb4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb445434-4b77-4079-a14a-c21480d7bb4e" (UID: "fb445434-4b77-4079-a14a-c21480d7bb4e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.501156 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srtxj\" (UniqueName: \"kubernetes.io/projected/fb445434-4b77-4079-a14a-c21480d7bb4e-kube-api-access-srtxj\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.501213 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb445434-4b77-4079-a14a-c21480d7bb4e-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.501225 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb445434-4b77-4079-a14a-c21480d7bb4e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.544025 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-cqh8x"
Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.576821 4812 generic.go:334] "Generic (PLEG): container finished" podID="fb445434-4b77-4079-a14a-c21480d7bb4e" containerID="54ce1aae4f76316ef2b0af968ec6c748427dd807d57db5179e5f6a5bb1b39387" exitCode=0
Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.576962 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5mlw" event={"ID":"fb445434-4b77-4079-a14a-c21480d7bb4e","Type":"ContainerDied","Data":"54ce1aae4f76316ef2b0af968ec6c748427dd807d57db5179e5f6a5bb1b39387"}
Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.577023 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5mlw" event={"ID":"fb445434-4b77-4079-a14a-c21480d7bb4e","Type":"ContainerDied","Data":"0c3ffa057e9d71c08c11ee71cc2969f1a9d97459b0461f4892bacb864367254a"}
Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.577029 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5mlw"
Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.577050 4812 scope.go:117] "RemoveContainer" containerID="54ce1aae4f76316ef2b0af968ec6c748427dd807d57db5179e5f6a5bb1b39387"
Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.639313 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cvglp"]
Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.639698 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" podUID="a81f17cc-32a6-4089-bf61-ea63d46b7f60" containerName="dnsmasq-dns" containerID="cri-o://a1daf64fd7e2e69a58e858950dcff8123d50b7db258267cd2dcedea00fcade73" gracePeriod=10
Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.643957 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.669559 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b5mlw"]
Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.704377 4812 scope.go:117] "RemoveContainer" containerID="6e1e27fae621f2e94362e36ed877e3fe118c756e522d07bb23330406288055f5"
Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.720924 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b5mlw"]
Feb 16 13:57:36 crc kubenswrapper[4812]: W0216 13:57:36.768809 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod325ea694_3236_4385_bdb2_2796db54e8a5.slice/crio-8d8bb95fb6d77117bf3af475cd4f1ac251dd4ad773cd4e10ac82d85bc215bcce WatchSource:0}: Error finding container 8d8bb95fb6d77117bf3af475cd4f1ac251dd4ad773cd4e10ac82d85bc215bcce: Status 404 returned error can't find the container with id 8d8bb95fb6d77117bf3af475cd4f1ac251dd4ad773cd4e10ac82d85bc215bcce
Feb 16 13:57:36 crc kubenswrapper[4812]: I0216 13:57:36.770647 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.163779 4812 scope.go:117] "RemoveContainer" containerID="3889d92f57bfbe69edfe67b47157fb3e7f77bcb29d89c6d13aef27cb485c8dd3"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.194848 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e2869bf1-5702-4053-b414-f1fa8ba4f481" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.195064 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e2869bf1-5702-4053-b414-f1fa8ba4f481" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.276091 4812 scope.go:117] "RemoveContainer" containerID="54ce1aae4f76316ef2b0af968ec6c748427dd807d57db5179e5f6a5bb1b39387"
Feb 16 13:57:37 crc kubenswrapper[4812]: E0216 13:57:37.278636 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54ce1aae4f76316ef2b0af968ec6c748427dd807d57db5179e5f6a5bb1b39387\": container with ID starting with 54ce1aae4f76316ef2b0af968ec6c748427dd807d57db5179e5f6a5bb1b39387 not found: ID does not exist" containerID="54ce1aae4f76316ef2b0af968ec6c748427dd807d57db5179e5f6a5bb1b39387"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.278697 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ce1aae4f76316ef2b0af968ec6c748427dd807d57db5179e5f6a5bb1b39387"} err="failed to get container status \"54ce1aae4f76316ef2b0af968ec6c748427dd807d57db5179e5f6a5bb1b39387\": rpc error: code = NotFound desc = could not find container \"54ce1aae4f76316ef2b0af968ec6c748427dd807d57db5179e5f6a5bb1b39387\": container with ID starting with 54ce1aae4f76316ef2b0af968ec6c748427dd807d57db5179e5f6a5bb1b39387 not found: ID does not exist"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.278733 4812 scope.go:117] "RemoveContainer" containerID="6e1e27fae621f2e94362e36ed877e3fe118c756e522d07bb23330406288055f5"
Feb 16 13:57:37 crc kubenswrapper[4812]: E0216 13:57:37.279408 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e1e27fae621f2e94362e36ed877e3fe118c756e522d07bb23330406288055f5\": container with ID starting with 6e1e27fae621f2e94362e36ed877e3fe118c756e522d07bb23330406288055f5 not found: ID does not exist" containerID="6e1e27fae621f2e94362e36ed877e3fe118c756e522d07bb23330406288055f5"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.279435 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e1e27fae621f2e94362e36ed877e3fe118c756e522d07bb23330406288055f5"} err="failed to get container status \"6e1e27fae621f2e94362e36ed877e3fe118c756e522d07bb23330406288055f5\": rpc error: code = NotFound desc = could not find container \"6e1e27fae621f2e94362e36ed877e3fe118c756e522d07bb23330406288055f5\": container with ID starting with 6e1e27fae621f2e94362e36ed877e3fe118c756e522d07bb23330406288055f5 not found: ID does not exist"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.279468 4812 scope.go:117] "RemoveContainer" containerID="3889d92f57bfbe69edfe67b47157fb3e7f77bcb29d89c6d13aef27cb485c8dd3"
Feb 16 13:57:37 crc kubenswrapper[4812]: E0216 13:57:37.279789 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3889d92f57bfbe69edfe67b47157fb3e7f77bcb29d89c6d13aef27cb485c8dd3\": container with ID starting with 3889d92f57bfbe69edfe67b47157fb3e7f77bcb29d89c6d13aef27cb485c8dd3 not found: ID does not exist" containerID="3889d92f57bfbe69edfe67b47157fb3e7f77bcb29d89c6d13aef27cb485c8dd3"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.279818 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3889d92f57bfbe69edfe67b47157fb3e7f77bcb29d89c6d13aef27cb485c8dd3"} err="failed to get container status \"3889d92f57bfbe69edfe67b47157fb3e7f77bcb29d89c6d13aef27cb485c8dd3\": rpc error: code = NotFound desc = could not find container \"3889d92f57bfbe69edfe67b47157fb3e7f77bcb29d89c6d13aef27cb485c8dd3\": container with ID starting with 3889d92f57bfbe69edfe67b47157fb3e7f77bcb29d89c6d13aef27cb485c8dd3 not found: ID does not exist"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.612405 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.614642 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"325ea694-3236-4385-bdb2-2796db54e8a5","Type":"ContainerStarted","Data":"ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993"}
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.614787 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"325ea694-3236-4385-bdb2-2796db54e8a5","Type":"ContainerStarted","Data":"8d8bb95fb6d77117bf3af475cd4f1ac251dd4ad773cd4e10ac82d85bc215bcce"}
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.622791 4812 generic.go:334] "Generic (PLEG): container finished" podID="a81f17cc-32a6-4089-bf61-ea63d46b7f60" containerID="a1daf64fd7e2e69a58e858950dcff8123d50b7db258267cd2dcedea00fcade73" exitCode=0
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.622997 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" event={"ID":"a81f17cc-32a6-4089-bf61-ea63d46b7f60","Type":"ContainerDied","Data":"a1daf64fd7e2e69a58e858950dcff8123d50b7db258267cd2dcedea00fcade73"}
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.623082 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp" event={"ID":"a81f17cc-32a6-4089-bf61-ea63d46b7f60","Type":"ContainerDied","Data":"efa52ebc94cf519afe3a362337a88f0439772e0a65e19e97fb76402c532703a4"}
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.623144 4812 scope.go:117] "RemoveContainer" containerID="a1daf64fd7e2e69a58e858950dcff8123d50b7db258267cd2dcedea00fcade73"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.623960 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-cvglp"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.685473 4812 scope.go:117] "RemoveContainer" containerID="c1efe357abcc44912a5c1e69a578e74717e0f136c9784583633b74c622eac395"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.722093 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdwgv\" (UniqueName: \"kubernetes.io/projected/a81f17cc-32a6-4089-bf61-ea63d46b7f60-kube-api-access-jdwgv\") pod \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") "
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.722238 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-ovsdbserver-sb\") pod \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") "
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.722307 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-dns-swift-storage-0\") pod \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") "
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.722786 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-config\") pod \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") "
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.722892 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-ovsdbserver-nb\") pod \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") "
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.722996 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-dns-svc\") pod \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\" (UID: \"a81f17cc-32a6-4089-bf61-ea63d46b7f60\") "
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.791456 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a81f17cc-32a6-4089-bf61-ea63d46b7f60-kube-api-access-jdwgv" (OuterVolumeSpecName: "kube-api-access-jdwgv") pod "a81f17cc-32a6-4089-bf61-ea63d46b7f60" (UID: "a81f17cc-32a6-4089-bf61-ea63d46b7f60"). InnerVolumeSpecName "kube-api-access-jdwgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.801292 4812 scope.go:117] "RemoveContainer" containerID="a1daf64fd7e2e69a58e858950dcff8123d50b7db258267cd2dcedea00fcade73"
Feb 16 13:57:37 crc kubenswrapper[4812]: E0216 13:57:37.802841 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1daf64fd7e2e69a58e858950dcff8123d50b7db258267cd2dcedea00fcade73\": container with ID starting with a1daf64fd7e2e69a58e858950dcff8123d50b7db258267cd2dcedea00fcade73 not found: ID does not exist" containerID="a1daf64fd7e2e69a58e858950dcff8123d50b7db258267cd2dcedea00fcade73"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.802885 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1daf64fd7e2e69a58e858950dcff8123d50b7db258267cd2dcedea00fcade73"} err="failed to get container status \"a1daf64fd7e2e69a58e858950dcff8123d50b7db258267cd2dcedea00fcade73\": rpc error: code = NotFound desc = could not find container \"a1daf64fd7e2e69a58e858950dcff8123d50b7db258267cd2dcedea00fcade73\": container with ID starting with a1daf64fd7e2e69a58e858950dcff8123d50b7db258267cd2dcedea00fcade73 not found: ID does not exist"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.802917 4812 scope.go:117] "RemoveContainer" containerID="c1efe357abcc44912a5c1e69a578e74717e0f136c9784583633b74c622eac395"
Feb 16 13:57:37 crc kubenswrapper[4812]: E0216 13:57:37.806833 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1efe357abcc44912a5c1e69a578e74717e0f136c9784583633b74c622eac395\": container with ID starting with c1efe357abcc44912a5c1e69a578e74717e0f136c9784583633b74c622eac395 not found: ID does not exist" containerID="c1efe357abcc44912a5c1e69a578e74717e0f136c9784583633b74c622eac395"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.806888 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1efe357abcc44912a5c1e69a578e74717e0f136c9784583633b74c622eac395"} err="failed to get container status \"c1efe357abcc44912a5c1e69a578e74717e0f136c9784583633b74c622eac395\": rpc error: code = NotFound desc = could not find container \"c1efe357abcc44912a5c1e69a578e74717e0f136c9784583633b74c622eac395\": container with ID starting with c1efe357abcc44912a5c1e69a578e74717e0f136c9784583633b74c622eac395 not found: ID does not exist"
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.834897 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdwgv\" (UniqueName: \"kubernetes.io/projected/a81f17cc-32a6-4089-bf61-ea63d46b7f60-kube-api-access-jdwgv\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.951363 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a81f17cc-32a6-4089-bf61-ea63d46b7f60" (UID: "a81f17cc-32a6-4089-bf61-ea63d46b7f60"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.971244 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-config" (OuterVolumeSpecName: "config") pod "a81f17cc-32a6-4089-bf61-ea63d46b7f60" (UID: "a81f17cc-32a6-4089-bf61-ea63d46b7f60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:57:37 crc kubenswrapper[4812]: I0216 13:57:37.975356 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb445434-4b77-4079-a14a-c21480d7bb4e" path="/var/lib/kubelet/pods/fb445434-4b77-4079-a14a-c21480d7bb4e/volumes"
Feb 16 13:57:38 crc kubenswrapper[4812]: I0216 13:57:38.007704 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a81f17cc-32a6-4089-bf61-ea63d46b7f60" (UID: "a81f17cc-32a6-4089-bf61-ea63d46b7f60"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:57:38 crc kubenswrapper[4812]: I0216 13:57:38.010023 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a81f17cc-32a6-4089-bf61-ea63d46b7f60" (UID: "a81f17cc-32a6-4089-bf61-ea63d46b7f60"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:57:38 crc kubenswrapper[4812]: I0216 13:57:38.018036 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a81f17cc-32a6-4089-bf61-ea63d46b7f60" (UID: "a81f17cc-32a6-4089-bf61-ea63d46b7f60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 13:57:38 crc kubenswrapper[4812]: I0216 13:57:38.040986 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:38 crc kubenswrapper[4812]: I0216 13:57:38.041144 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:38 crc kubenswrapper[4812]: I0216 13:57:38.041226 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-config\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:38 crc kubenswrapper[4812]: I0216 13:57:38.041292 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:38 crc kubenswrapper[4812]: I0216 13:57:38.041380 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a81f17cc-32a6-4089-bf61-ea63d46b7f60-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:38 crc kubenswrapper[4812]: I0216 13:57:38.216713 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 16 13:57:38 crc kubenswrapper[4812]: I0216 13:57:38.344533 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cvglp"]
Feb 16 13:57:38 crc kubenswrapper[4812]: I0216 13:57:38.363891 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cvglp"]
Feb 16 13:57:38 crc kubenswrapper[4812]: I0216 13:57:38.643479 4812 generic.go:334] "Generic (PLEG): container finished" podID="9f356bcf-8719-4c4d-a9f8-b21489380dd8" containerID="828887c156eb0c0a116591a211bcfb060d33115558755f9c952c246f28a2e6c3" exitCode=0
Feb 16 13:57:38 crc kubenswrapper[4812]: I0216 13:57:38.643500 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-46bzm" event={"ID":"9f356bcf-8719-4c4d-a9f8-b21489380dd8","Type":"ContainerDied","Data":"828887c156eb0c0a116591a211bcfb060d33115558755f9c952c246f28a2e6c3"}
Feb 16 13:57:38 crc kubenswrapper[4812]: I0216 13:57:38.646240 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"325ea694-3236-4385-bdb2-2796db54e8a5","Type":"ContainerStarted","Data":"c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1"}
Feb 16 13:57:38 crc kubenswrapper[4812]: I0216 13:57:38.708916 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.708884138 podStartE2EDuration="3.708884138s" podCreationTimestamp="2026-02-16 13:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:57:38.700825504 +0000 UTC m=+1547.765156225" watchObservedRunningTime="2026-02-16 13:57:38.708884138 +0000 UTC m=+1547.773214839"
Feb 16 13:57:39 crc kubenswrapper[4812]: I0216 13:57:39.898513 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a81f17cc-32a6-4089-bf61-ea63d46b7f60" path="/var/lib/kubelet/pods/a81f17cc-32a6-4089-bf61-ea63d46b7f60/volumes"
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.223772 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-46bzm"
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.391887 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-combined-ca-bundle\") pod \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\" (UID: \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\") "
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.392052 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5v7b\" (UniqueName: \"kubernetes.io/projected/9f356bcf-8719-4c4d-a9f8-b21489380dd8-kube-api-access-s5v7b\") pod \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\" (UID: \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\") "
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.392077 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-scripts\") pod \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\" (UID: \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\") "
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.392126 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-config-data\") pod \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\" (UID: \"9f356bcf-8719-4c4d-a9f8-b21489380dd8\") "
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.414706 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-scripts" (OuterVolumeSpecName: "scripts") pod "9f356bcf-8719-4c4d-a9f8-b21489380dd8" (UID: "9f356bcf-8719-4c4d-a9f8-b21489380dd8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.414748 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f356bcf-8719-4c4d-a9f8-b21489380dd8-kube-api-access-s5v7b" (OuterVolumeSpecName: "kube-api-access-s5v7b") pod "9f356bcf-8719-4c4d-a9f8-b21489380dd8" (UID: "9f356bcf-8719-4c4d-a9f8-b21489380dd8"). InnerVolumeSpecName "kube-api-access-s5v7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.441916 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f356bcf-8719-4c4d-a9f8-b21489380dd8" (UID: "9f356bcf-8719-4c4d-a9f8-b21489380dd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.482316 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-config-data" (OuterVolumeSpecName: "config-data") pod "9f356bcf-8719-4c4d-a9f8-b21489380dd8" (UID: "9f356bcf-8719-4c4d-a9f8-b21489380dd8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.494997 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.495037 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5v7b\" (UniqueName: \"kubernetes.io/projected/9f356bcf-8719-4c4d-a9f8-b21489380dd8-kube-api-access-s5v7b\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.495052 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.495062 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f356bcf-8719-4c4d-a9f8-b21489380dd8-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.679014 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-46bzm" event={"ID":"9f356bcf-8719-4c4d-a9f8-b21489380dd8","Type":"ContainerDied","Data":"6862892d567dc0569a1d1c395a8c7c00d3df9aae59224a7b834caf800a283ca6"}
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.679095 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6862892d567dc0569a1d1c395a8c7c00d3df9aae59224a7b834caf800a283ca6"
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.679213 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-46bzm"
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.892984 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.893352 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e2869bf1-5702-4053-b414-f1fa8ba4f481" containerName="nova-api-log" containerID="cri-o://1b9b2c87e012b6b09c39bf60b69e825713d9c5ff496357c4f80f67add3eb665e" gracePeriod=30
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.893580 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e2869bf1-5702-4053-b414-f1fa8ba4f481" containerName="nova-api-api" containerID="cri-o://d739f57aa156d8a09a1a68eee47ffe00c31d85240d6bbc24680ee30418d8fa37" gracePeriod=30
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.917684 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 13:57:40 crc kubenswrapper[4812]: I0216 13:57:40.918078 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ebb386ce-7ac7-465f-952e-ba006a49411d" containerName="nova-scheduler-scheduler" containerID="cri-o://85d49ad791cd4ab93dbee97194bd2ff6b594c64715dc8ed1b587507e69ff64e0" gracePeriod=30
Feb 16 13:57:41 crc kubenswrapper[4812]: I0216 13:57:41.091874 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 13:57:41 crc kubenswrapper[4812]: I0216 13:57:41.092588 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 13:57:41 crc kubenswrapper[4812]: I0216 13:57:41.123249 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 13:57:41 crc kubenswrapper[4812]: I0216 13:57:41.698020 4812 generic.go:334] "Generic (PLEG): container finished" podID="3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd" containerID="66dba176812ac361fc65ee2a48bea40acb3b46a5825e514c2c7c4d21aea33468" exitCode=0
Feb 16 13:57:41 crc kubenswrapper[4812]: I0216 13:57:41.699004 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4snbn" event={"ID":"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd","Type":"ContainerDied","Data":"66dba176812ac361fc65ee2a48bea40acb3b46a5825e514c2c7c4d21aea33468"}
Feb 16 13:57:41 crc kubenswrapper[4812]: I0216 13:57:41.703752 4812 generic.go:334] "Generic (PLEG): container finished" podID="e2869bf1-5702-4053-b414-f1fa8ba4f481" containerID="1b9b2c87e012b6b09c39bf60b69e825713d9c5ff496357c4f80f67add3eb665e" exitCode=143
Feb 16 13:57:41 crc kubenswrapper[4812]: I0216 13:57:41.703929 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2869bf1-5702-4053-b414-f1fa8ba4f481","Type":"ContainerDied","Data":"1b9b2c87e012b6b09c39bf60b69e825713d9c5ff496357c4f80f67add3eb665e"}
Feb 16 13:57:42 crc kubenswrapper[4812]: I0216 13:57:42.715540 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="325ea694-3236-4385-bdb2-2796db54e8a5" containerName="nova-metadata-log" containerID="cri-o://ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993" gracePeriod=30
Feb 16 13:57:42 crc kubenswrapper[4812]: I0216 13:57:42.716525 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="325ea694-3236-4385-bdb2-2796db54e8a5" containerName="nova-metadata-metadata" containerID="cri-o://c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1" gracePeriod=30
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.370276 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4snbn"
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.544781 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.545142 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6039f662-e9ac-455c-b4da-9bcbe34e1396" containerName="kube-state-metrics" containerID="cri-o://5e066bbc0364950e73123a474b801275b367f9b18d3e19d360b9d8602a4a7f1c" gracePeriod=30
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.764167 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-config-data\") pod \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\" (UID: \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\") "
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.764310 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h59nv\" (UniqueName: \"kubernetes.io/projected/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-kube-api-access-h59nv\") pod \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\" (UID: \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\") "
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.764824 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-scripts\") pod \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\" (UID: \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\") "
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.765013 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-combined-ca-bundle\") pod \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\" (UID: \"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd\") "
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.781522 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-kube-api-access-h59nv" (OuterVolumeSpecName: "kube-api-access-h59nv") pod "3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd" (UID: "3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd"). InnerVolumeSpecName "kube-api-access-h59nv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.784152 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-scripts" (OuterVolumeSpecName: "scripts") pod "3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd" (UID: "3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.802102 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.808855 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h59nv\" (UniqueName: \"kubernetes.io/projected/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-kube-api-access-h59nv\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.808938 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.821610 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4snbn" event={"ID":"3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd","Type":"ContainerDied","Data":"e9c4644b80119f609c18a297d2636a42bf16348c731903425788c942ef825a49"}
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.821689 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9c4644b80119f609c18a297d2636a42bf16348c731903425788c942ef825a49"
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.821847 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4snbn"
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.836684 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-config-data" (OuterVolumeSpecName: "config-data") pod "3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd" (UID: "3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.841789 4812 generic.go:334] "Generic (PLEG): container finished" podID="325ea694-3236-4385-bdb2-2796db54e8a5" containerID="c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1" exitCode=0
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.841834 4812 generic.go:334] "Generic (PLEG): container finished" podID="325ea694-3236-4385-bdb2-2796db54e8a5" containerID="ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993" exitCode=143
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.841860 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"325ea694-3236-4385-bdb2-2796db54e8a5","Type":"ContainerDied","Data":"c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1"}
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.841892 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"325ea694-3236-4385-bdb2-2796db54e8a5","Type":"ContainerDied","Data":"ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993"}
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.841903 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"325ea694-3236-4385-bdb2-2796db54e8a5","Type":"ContainerDied","Data":"8d8bb95fb6d77117bf3af475cd4f1ac251dd4ad773cd4e10ac82d85bc215bcce"}
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.841923 4812 scope.go:117] "RemoveContainer" containerID="c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1"
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.842082 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.917649 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-config-data\") pod \"325ea694-3236-4385-bdb2-2796db54e8a5\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") "
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.917792 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/325ea694-3236-4385-bdb2-2796db54e8a5-logs\") pod \"325ea694-3236-4385-bdb2-2796db54e8a5\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") "
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.923231 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-combined-ca-bundle\") pod \"325ea694-3236-4385-bdb2-2796db54e8a5\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") "
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.923294 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmfdq\" (UniqueName: \"kubernetes.io/projected/325ea694-3236-4385-bdb2-2796db54e8a5-kube-api-access-nmfdq\") pod \"325ea694-3236-4385-bdb2-2796db54e8a5\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") "
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.923435 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-nova-metadata-tls-certs\") pod \"325ea694-3236-4385-bdb2-2796db54e8a5\" (UID: \"325ea694-3236-4385-bdb2-2796db54e8a5\") "
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.924436 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.925708 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/325ea694-3236-4385-bdb2-2796db54e8a5-logs" (OuterVolumeSpecName: "logs") pod "325ea694-3236-4385-bdb2-2796db54e8a5" (UID: "325ea694-3236-4385-bdb2-2796db54e8a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.946718 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/325ea694-3236-4385-bdb2-2796db54e8a5-kube-api-access-nmfdq" (OuterVolumeSpecName: "kube-api-access-nmfdq") pod "325ea694-3236-4385-bdb2-2796db54e8a5" (UID: "325ea694-3236-4385-bdb2-2796db54e8a5"). InnerVolumeSpecName "kube-api-access-nmfdq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:57:43 crc kubenswrapper[4812]: I0216 13:57:43.980936 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd" (UID: "3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.023862 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-config-data" (OuterVolumeSpecName: "config-data") pod "325ea694-3236-4385-bdb2-2796db54e8a5" (UID: "325ea694-3236-4385-bdb2-2796db54e8a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.026642 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/325ea694-3236-4385-bdb2-2796db54e8a5-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.027178 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.027263 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmfdq\" (UniqueName: \"kubernetes.io/projected/325ea694-3236-4385-bdb2-2796db54e8a5-kube-api-access-nmfdq\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.027364 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.071905 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "325ea694-3236-4385-bdb2-2796db54e8a5" (UID: "325ea694-3236-4385-bdb2-2796db54e8a5"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.076954 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "325ea694-3236-4385-bdb2-2796db54e8a5" (UID: "325ea694-3236-4385-bdb2-2796db54e8a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.130758 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.130792 4812 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/325ea694-3236-4385-bdb2-2796db54e8a5-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.149307 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 13:57:44 crc kubenswrapper[4812]: E0216 13:57:44.150043 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f356bcf-8719-4c4d-a9f8-b21489380dd8" containerName="nova-manage" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.150070 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f356bcf-8719-4c4d-a9f8-b21489380dd8" containerName="nova-manage" Feb 16 13:57:44 crc kubenswrapper[4812]: E0216 13:57:44.150087 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd" containerName="nova-cell1-conductor-db-sync" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.150095 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd" 
containerName="nova-cell1-conductor-db-sync" Feb 16 13:57:44 crc kubenswrapper[4812]: E0216 13:57:44.150125 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb445434-4b77-4079-a14a-c21480d7bb4e" containerName="registry-server" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.150132 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb445434-4b77-4079-a14a-c21480d7bb4e" containerName="registry-server" Feb 16 13:57:44 crc kubenswrapper[4812]: E0216 13:57:44.150156 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81f17cc-32a6-4089-bf61-ea63d46b7f60" containerName="init" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.150163 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81f17cc-32a6-4089-bf61-ea63d46b7f60" containerName="init" Feb 16 13:57:44 crc kubenswrapper[4812]: E0216 13:57:44.150173 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="325ea694-3236-4385-bdb2-2796db54e8a5" containerName="nova-metadata-log" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.150195 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="325ea694-3236-4385-bdb2-2796db54e8a5" containerName="nova-metadata-log" Feb 16 13:57:44 crc kubenswrapper[4812]: E0216 13:57:44.150211 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81f17cc-32a6-4089-bf61-ea63d46b7f60" containerName="dnsmasq-dns" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.150219 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81f17cc-32a6-4089-bf61-ea63d46b7f60" containerName="dnsmasq-dns" Feb 16 13:57:44 crc kubenswrapper[4812]: E0216 13:57:44.150236 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb445434-4b77-4079-a14a-c21480d7bb4e" containerName="extract-content" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.150242 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb445434-4b77-4079-a14a-c21480d7bb4e" 
containerName="extract-content" Feb 16 13:57:44 crc kubenswrapper[4812]: E0216 13:57:44.150253 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="325ea694-3236-4385-bdb2-2796db54e8a5" containerName="nova-metadata-metadata" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.150278 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="325ea694-3236-4385-bdb2-2796db54e8a5" containerName="nova-metadata-metadata" Feb 16 13:57:44 crc kubenswrapper[4812]: E0216 13:57:44.150289 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb445434-4b77-4079-a14a-c21480d7bb4e" containerName="extract-utilities" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.150294 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb445434-4b77-4079-a14a-c21480d7bb4e" containerName="extract-utilities" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.150676 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f356bcf-8719-4c4d-a9f8-b21489380dd8" containerName="nova-manage" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.150698 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="325ea694-3236-4385-bdb2-2796db54e8a5" containerName="nova-metadata-metadata" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.150711 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd" containerName="nova-cell1-conductor-db-sync" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.150720 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb445434-4b77-4079-a14a-c21480d7bb4e" containerName="registry-server" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.150737 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a81f17cc-32a6-4089-bf61-ea63d46b7f60" containerName="dnsmasq-dns" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.150748 4812 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="325ea694-3236-4385-bdb2-2796db54e8a5" containerName="nova-metadata-log" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.151542 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.151628 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.154903 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.187439 4812 scope.go:117] "RemoveContainer" containerID="ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.232821 4812 scope.go:117] "RemoveContainer" containerID="c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1" Feb 16 13:57:44 crc kubenswrapper[4812]: E0216 13:57:44.240541 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1\": container with ID starting with c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1 not found: ID does not exist" containerID="c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.240778 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1"} err="failed to get container status \"c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1\": rpc error: code = NotFound desc = could not find container \"c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1\": container with ID starting with c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1 not found: ID does 
not exist" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.240810 4812 scope.go:117] "RemoveContainer" containerID="ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993" Feb 16 13:57:44 crc kubenswrapper[4812]: E0216 13:57:44.243371 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993\": container with ID starting with ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993 not found: ID does not exist" containerID="ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.243455 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993"} err="failed to get container status \"ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993\": rpc error: code = NotFound desc = could not find container \"ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993\": container with ID starting with ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993 not found: ID does not exist" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.243499 4812 scope.go:117] "RemoveContainer" containerID="c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.247955 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1"} err="failed to get container status \"c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1\": rpc error: code = NotFound desc = could not find container \"c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1\": container with ID starting with c939d6cd3c6ab8a0a1dd36b246acf6dfbf4be1e90aea19c8eba3497c110e1df1 not 
found: ID does not exist" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.362554 4812 scope.go:117] "RemoveContainer" containerID="ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.249270 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.362872 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.362901 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.365524 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.267325 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ddd42b-256a-4ab3-a348-bfa32b61cd2e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"31ddd42b-256a-4ab3-a348-bfa32b61cd2e\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.365852 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.366356 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31ddd42b-256a-4ab3-a348-bfa32b61cd2e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"31ddd42b-256a-4ab3-a348-bfa32b61cd2e\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.366739 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb5lw\" (UniqueName: \"kubernetes.io/projected/31ddd42b-256a-4ab3-a348-bfa32b61cd2e-kube-api-access-mb5lw\") pod \"nova-cell1-conductor-0\" (UID: \"31ddd42b-256a-4ab3-a348-bfa32b61cd2e\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.367425 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993"} err="failed to get container status \"ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993\": rpc error: code = NotFound desc = could not find container \"ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993\": container with ID starting with ab4bfe8bc9a4bdff453dc57377e99dfaf3fa301003a5f62b766e03bdaf87c993 not found: ID does not exist" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.373285 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.373772 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.469877 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.469983 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-config-data\") pod \"nova-metadata-0\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.470360 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad25a505-b306-47ee-92dc-19b8635d455b-logs\") pod \"nova-metadata-0\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.470631 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31ddd42b-256a-4ab3-a348-bfa32b61cd2e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"31ddd42b-256a-4ab3-a348-bfa32b61cd2e\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.470763 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.471227 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb5lw\" (UniqueName: \"kubernetes.io/projected/31ddd42b-256a-4ab3-a348-bfa32b61cd2e-kube-api-access-mb5lw\") pod \"nova-cell1-conductor-0\" (UID: 
\"31ddd42b-256a-4ab3-a348-bfa32b61cd2e\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.471480 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v4h5\" (UniqueName: \"kubernetes.io/projected/ad25a505-b306-47ee-92dc-19b8635d455b-kube-api-access-6v4h5\") pod \"nova-metadata-0\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.471581 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ddd42b-256a-4ab3-a348-bfa32b61cd2e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"31ddd42b-256a-4ab3-a348-bfa32b61cd2e\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.493056 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ddd42b-256a-4ab3-a348-bfa32b61cd2e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"31ddd42b-256a-4ab3-a348-bfa32b61cd2e\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.493325 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31ddd42b-256a-4ab3-a348-bfa32b61cd2e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"31ddd42b-256a-4ab3-a348-bfa32b61cd2e\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.503337 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb5lw\" (UniqueName: \"kubernetes.io/projected/31ddd42b-256a-4ab3-a348-bfa32b61cd2e-kube-api-access-mb5lw\") pod \"nova-cell1-conductor-0\" (UID: \"31ddd42b-256a-4ab3-a348-bfa32b61cd2e\") " pod="openstack/nova-cell1-conductor-0" Feb 16 13:57:44 crc 
kubenswrapper[4812]: I0216 13:57:44.549131 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.549225 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.574339 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-config-data\") pod \"nova-metadata-0\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.574638 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad25a505-b306-47ee-92dc-19b8635d455b-logs\") pod \"nova-metadata-0\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.574713 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.574830 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v4h5\" (UniqueName: 
\"kubernetes.io/projected/ad25a505-b306-47ee-92dc-19b8635d455b-kube-api-access-6v4h5\") pod \"nova-metadata-0\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.574866 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.575879 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad25a505-b306-47ee-92dc-19b8635d455b-logs\") pod \"nova-metadata-0\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.581888 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-config-data\") pod \"nova-metadata-0\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.591822 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.591843 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc 
kubenswrapper[4812]: I0216 13:57:44.593771 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.606068 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v4h5\" (UniqueName: \"kubernetes.io/projected/ad25a505-b306-47ee-92dc-19b8635d455b-kube-api-access-6v4h5\") pod \"nova-metadata-0\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.683118 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v9nt\" (UniqueName: \"kubernetes.io/projected/6039f662-e9ac-455c-b4da-9bcbe34e1396-kube-api-access-9v9nt\") pod \"6039f662-e9ac-455c-b4da-9bcbe34e1396\" (UID: \"6039f662-e9ac-455c-b4da-9bcbe34e1396\") " Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.688072 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6039f662-e9ac-455c-b4da-9bcbe34e1396-kube-api-access-9v9nt" (OuterVolumeSpecName: "kube-api-access-9v9nt") pod "6039f662-e9ac-455c-b4da-9bcbe34e1396" (UID: "6039f662-e9ac-455c-b4da-9bcbe34e1396"). InnerVolumeSpecName "kube-api-access-9v9nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.741951 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.789402 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v9nt\" (UniqueName: \"kubernetes.io/projected/6039f662-e9ac-455c-b4da-9bcbe34e1396-kube-api-access-9v9nt\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.798923 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.896334 4812 generic.go:334] "Generic (PLEG): container finished" podID="e2869bf1-5702-4053-b414-f1fa8ba4f481" containerID="d739f57aa156d8a09a1a68eee47ffe00c31d85240d6bbc24680ee30418d8fa37" exitCode=0 Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.896566 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2869bf1-5702-4053-b414-f1fa8ba4f481","Type":"ContainerDied","Data":"d739f57aa156d8a09a1a68eee47ffe00c31d85240d6bbc24680ee30418d8fa37"} Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.902663 4812 generic.go:334] "Generic (PLEG): container finished" podID="6039f662-e9ac-455c-b4da-9bcbe34e1396" containerID="5e066bbc0364950e73123a474b801275b367f9b18d3e19d360b9d8602a4a7f1c" exitCode=2 Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.902840 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.902864 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6039f662-e9ac-455c-b4da-9bcbe34e1396","Type":"ContainerDied","Data":"5e066bbc0364950e73123a474b801275b367f9b18d3e19d360b9d8602a4a7f1c"} Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.902917 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6039f662-e9ac-455c-b4da-9bcbe34e1396","Type":"ContainerDied","Data":"04e5804e0a98d60afe9cb7e7e16d8f1527ea077e9314bc55bf83714bc04c0a81"} Feb 16 13:57:44 crc kubenswrapper[4812]: I0216 13:57:44.902942 4812 scope.go:117] "RemoveContainer" containerID="5e066bbc0364950e73123a474b801275b367f9b18d3e19d360b9d8602a4a7f1c" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.060795 4812 scope.go:117] "RemoveContainer" 
containerID="5e066bbc0364950e73123a474b801275b367f9b18d3e19d360b9d8602a4a7f1c" Feb 16 13:57:45 crc kubenswrapper[4812]: E0216 13:57:45.067089 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e066bbc0364950e73123a474b801275b367f9b18d3e19d360b9d8602a4a7f1c\": container with ID starting with 5e066bbc0364950e73123a474b801275b367f9b18d3e19d360b9d8602a4a7f1c not found: ID does not exist" containerID="5e066bbc0364950e73123a474b801275b367f9b18d3e19d360b9d8602a4a7f1c" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.067179 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e066bbc0364950e73123a474b801275b367f9b18d3e19d360b9d8602a4a7f1c"} err="failed to get container status \"5e066bbc0364950e73123a474b801275b367f9b18d3e19d360b9d8602a4a7f1c\": rpc error: code = NotFound desc = could not find container \"5e066bbc0364950e73123a474b801275b367f9b18d3e19d360b9d8602a4a7f1c\": container with ID starting with 5e066bbc0364950e73123a474b801275b367f9b18d3e19d360b9d8602a4a7f1c not found: ID does not exist" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.158079 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.226668 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.274549 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:57:45 crc kubenswrapper[4812]: E0216 13:57:45.275545 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6039f662-e9ac-455c-b4da-9bcbe34e1396" containerName="kube-state-metrics" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.275569 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6039f662-e9ac-455c-b4da-9bcbe34e1396" 
containerName="kube-state-metrics" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.275844 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6039f662-e9ac-455c-b4da-9bcbe34e1396" containerName="kube-state-metrics" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.277077 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.287288 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.295210 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.384543 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.480296 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-846vx\" (UniqueName: \"kubernetes.io/projected/f508573d-dccc-4922-9173-48c8c9a8e134-kube-api-access-846vx\") pod \"kube-state-metrics-0\" (UID: \"f508573d-dccc-4922-9173-48c8c9a8e134\") " pod="openstack/kube-state-metrics-0" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.480607 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f508573d-dccc-4922-9173-48c8c9a8e134-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f508573d-dccc-4922-9173-48c8c9a8e134\") " pod="openstack/kube-state-metrics-0" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.480689 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f508573d-dccc-4922-9173-48c8c9a8e134-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f508573d-dccc-4922-9173-48c8c9a8e134\") " pod="openstack/kube-state-metrics-0" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.480857 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f508573d-dccc-4922-9173-48c8c9a8e134-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f508573d-dccc-4922-9173-48c8c9a8e134\") " pod="openstack/kube-state-metrics-0" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.584712 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-846vx\" (UniqueName: \"kubernetes.io/projected/f508573d-dccc-4922-9173-48c8c9a8e134-kube-api-access-846vx\") pod \"kube-state-metrics-0\" (UID: \"f508573d-dccc-4922-9173-48c8c9a8e134\") " pod="openstack/kube-state-metrics-0" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.584930 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f508573d-dccc-4922-9173-48c8c9a8e134-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f508573d-dccc-4922-9173-48c8c9a8e134\") " pod="openstack/kube-state-metrics-0" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.584981 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f508573d-dccc-4922-9173-48c8c9a8e134-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f508573d-dccc-4922-9173-48c8c9a8e134\") " pod="openstack/kube-state-metrics-0" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.585054 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/f508573d-dccc-4922-9173-48c8c9a8e134-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f508573d-dccc-4922-9173-48c8c9a8e134\") " pod="openstack/kube-state-metrics-0" Feb 16 13:57:45 crc kubenswrapper[4812]: E0216 13:57:45.601587 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 85d49ad791cd4ab93dbee97194bd2ff6b594c64715dc8ed1b587507e69ff64e0 is running failed: container process not found" containerID="85d49ad791cd4ab93dbee97194bd2ff6b594c64715dc8ed1b587507e69ff64e0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.603048 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f508573d-dccc-4922-9173-48c8c9a8e134-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f508573d-dccc-4922-9173-48c8c9a8e134\") " pod="openstack/kube-state-metrics-0" Feb 16 13:57:45 crc kubenswrapper[4812]: E0216 13:57:45.603406 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 85d49ad791cd4ab93dbee97194bd2ff6b594c64715dc8ed1b587507e69ff64e0 is running failed: container process not found" containerID="85d49ad791cd4ab93dbee97194bd2ff6b594c64715dc8ed1b587507e69ff64e0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 13:57:45 crc kubenswrapper[4812]: E0216 13:57:45.605374 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 85d49ad791cd4ab93dbee97194bd2ff6b594c64715dc8ed1b587507e69ff64e0 is running failed: container process not found" containerID="85d49ad791cd4ab93dbee97194bd2ff6b594c64715dc8ed1b587507e69ff64e0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 13:57:45 crc 
kubenswrapper[4812]: E0216 13:57:45.605540 4812 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 85d49ad791cd4ab93dbee97194bd2ff6b594c64715dc8ed1b587507e69ff64e0 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ebb386ce-7ac7-465f-952e-ba006a49411d" containerName="nova-scheduler-scheduler" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.605710 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.606593 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f508573d-dccc-4922-9173-48c8c9a8e134-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f508573d-dccc-4922-9173-48c8c9a8e134\") " pod="openstack/kube-state-metrics-0" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.606747 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f508573d-dccc-4922-9173-48c8c9a8e134-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f508573d-dccc-4922-9173-48c8c9a8e134\") " pod="openstack/kube-state-metrics-0" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.628901 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-846vx\" (UniqueName: \"kubernetes.io/projected/f508573d-dccc-4922-9173-48c8c9a8e134-kube-api-access-846vx\") pod \"kube-state-metrics-0\" (UID: \"f508573d-dccc-4922-9173-48c8c9a8e134\") " pod="openstack/kube-state-metrics-0" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.686668 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78j7d\" (UniqueName: 
\"kubernetes.io/projected/e2869bf1-5702-4053-b414-f1fa8ba4f481-kube-api-access-78j7d\") pod \"e2869bf1-5702-4053-b414-f1fa8ba4f481\" (UID: \"e2869bf1-5702-4053-b414-f1fa8ba4f481\") " Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.686764 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2869bf1-5702-4053-b414-f1fa8ba4f481-config-data\") pod \"e2869bf1-5702-4053-b414-f1fa8ba4f481\" (UID: \"e2869bf1-5702-4053-b414-f1fa8ba4f481\") " Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.686999 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2869bf1-5702-4053-b414-f1fa8ba4f481-combined-ca-bundle\") pod \"e2869bf1-5702-4053-b414-f1fa8ba4f481\" (UID: \"e2869bf1-5702-4053-b414-f1fa8ba4f481\") " Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.687050 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2869bf1-5702-4053-b414-f1fa8ba4f481-logs\") pod \"e2869bf1-5702-4053-b414-f1fa8ba4f481\" (UID: \"e2869bf1-5702-4053-b414-f1fa8ba4f481\") " Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.691019 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2869bf1-5702-4053-b414-f1fa8ba4f481-logs" (OuterVolumeSpecName: "logs") pod "e2869bf1-5702-4053-b414-f1fa8ba4f481" (UID: "e2869bf1-5702-4053-b414-f1fa8ba4f481"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.693612 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2869bf1-5702-4053-b414-f1fa8ba4f481-kube-api-access-78j7d" (OuterVolumeSpecName: "kube-api-access-78j7d") pod "e2869bf1-5702-4053-b414-f1fa8ba4f481" (UID: "e2869bf1-5702-4053-b414-f1fa8ba4f481"). 
InnerVolumeSpecName "kube-api-access-78j7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.732488 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:57:45 crc kubenswrapper[4812]: W0216 13:57:45.734373 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad25a505_b306_47ee_92dc_19b8635d455b.slice/crio-81893a0981942299fc5d5719d066ad690c026c6ddc21fa378ee51b6cb2ee6b2e WatchSource:0}: Error finding container 81893a0981942299fc5d5719d066ad690c026c6ddc21fa378ee51b6cb2ee6b2e: Status 404 returned error can't find the container with id 81893a0981942299fc5d5719d066ad690c026c6ddc21fa378ee51b6cb2ee6b2e Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.745773 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2869bf1-5702-4053-b414-f1fa8ba4f481-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2869bf1-5702-4053-b414-f1fa8ba4f481" (UID: "e2869bf1-5702-4053-b414-f1fa8ba4f481"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.749008 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2869bf1-5702-4053-b414-f1fa8ba4f481-config-data" (OuterVolumeSpecName: "config-data") pod "e2869bf1-5702-4053-b414-f1fa8ba4f481" (UID: "e2869bf1-5702-4053-b414-f1fa8ba4f481"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.792261 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2869bf1-5702-4053-b414-f1fa8ba4f481-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.792310 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2869bf1-5702-4053-b414-f1fa8ba4f481-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.792323 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78j7d\" (UniqueName: \"kubernetes.io/projected/e2869bf1-5702-4053-b414-f1fa8ba4f481-kube-api-access-78j7d\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:45 crc kubenswrapper[4812]: I0216 13:57:45.792340 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2869bf1-5702-4053-b414-f1fa8ba4f481-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.005872 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.094015 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="325ea694-3236-4385-bdb2-2796db54e8a5" path="/var/lib/kubelet/pods/325ea694-3236-4385-bdb2-2796db54e8a5/volumes" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.094761 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6039f662-e9ac-455c-b4da-9bcbe34e1396" path="/var/lib/kubelet/pods/6039f662-e9ac-455c-b4da-9bcbe34e1396/volumes" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.107889 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2869bf1-5702-4053-b414-f1fa8ba4f481","Type":"ContainerDied","Data":"b450a82a7848b296845287788b9ae8a6444d49b6015b86367e6edac25f3e0428"} Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.107957 4812 scope.go:117] "RemoveContainer" containerID="d739f57aa156d8a09a1a68eee47ffe00c31d85240d6bbc24680ee30418d8fa37" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.108160 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.126884 4812 generic.go:334] "Generic (PLEG): container finished" podID="ebb386ce-7ac7-465f-952e-ba006a49411d" containerID="85d49ad791cd4ab93dbee97194bd2ff6b594c64715dc8ed1b587507e69ff64e0" exitCode=0 Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.126988 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ebb386ce-7ac7-465f-952e-ba006a49411d","Type":"ContainerDied","Data":"85d49ad791cd4ab93dbee97194bd2ff6b594c64715dc8ed1b587507e69ff64e0"} Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.129973 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ad25a505-b306-47ee-92dc-19b8635d455b","Type":"ContainerStarted","Data":"81893a0981942299fc5d5719d066ad690c026c6ddc21fa378ee51b6cb2ee6b2e"} Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.179205 4812 scope.go:117] "RemoveContainer" containerID="1b9b2c87e012b6b09c39bf60b69e825713d9c5ff496357c4f80f67add3eb665e" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.219972 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.252427 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.295416 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 13:57:46 crc kubenswrapper[4812]: E0216 13:57:46.295954 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2869bf1-5702-4053-b414-f1fa8ba4f481" containerName="nova-api-log" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.295970 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2869bf1-5702-4053-b414-f1fa8ba4f481" containerName="nova-api-log" Feb 16 13:57:46 crc kubenswrapper[4812]: E0216 13:57:46.295991 4812 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2869bf1-5702-4053-b414-f1fa8ba4f481" containerName="nova-api-api" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.295997 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2869bf1-5702-4053-b414-f1fa8ba4f481" containerName="nova-api-api" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.296202 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2869bf1-5702-4053-b414-f1fa8ba4f481" containerName="nova-api-api" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.296223 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2869bf1-5702-4053-b414-f1fa8ba4f481" containerName="nova-api-log" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.297390 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.303155 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.319108 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.375328 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.423113 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af325ac-63cf-45be-8fd3-564597076853-config-data\") pod \"nova-api-0\" (UID: \"7af325ac-63cf-45be-8fd3-564597076853\") " pod="openstack/nova-api-0" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.423222 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7af325ac-63cf-45be-8fd3-564597076853-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7af325ac-63cf-45be-8fd3-564597076853\") " pod="openstack/nova-api-0" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.423332 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n22sw\" (UniqueName: \"kubernetes.io/projected/7af325ac-63cf-45be-8fd3-564597076853-kube-api-access-n22sw\") pod \"nova-api-0\" (UID: \"7af325ac-63cf-45be-8fd3-564597076853\") " pod="openstack/nova-api-0" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.425660 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7af325ac-63cf-45be-8fd3-564597076853-logs\") pod \"nova-api-0\" (UID: \"7af325ac-63cf-45be-8fd3-564597076853\") " pod="openstack/nova-api-0" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.527608 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af325ac-63cf-45be-8fd3-564597076853-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7af325ac-63cf-45be-8fd3-564597076853\") " pod="openstack/nova-api-0" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.527750 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n22sw\" (UniqueName: \"kubernetes.io/projected/7af325ac-63cf-45be-8fd3-564597076853-kube-api-access-n22sw\") pod \"nova-api-0\" (UID: \"7af325ac-63cf-45be-8fd3-564597076853\") " pod="openstack/nova-api-0" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.527808 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7af325ac-63cf-45be-8fd3-564597076853-logs\") pod \"nova-api-0\" (UID: \"7af325ac-63cf-45be-8fd3-564597076853\") " pod="openstack/nova-api-0" Feb 16 13:57:46 crc 
kubenswrapper[4812]: I0216 13:57:46.527890 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af325ac-63cf-45be-8fd3-564597076853-config-data\") pod \"nova-api-0\" (UID: \"7af325ac-63cf-45be-8fd3-564597076853\") " pod="openstack/nova-api-0" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.529125 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7af325ac-63cf-45be-8fd3-564597076853-logs\") pod \"nova-api-0\" (UID: \"7af325ac-63cf-45be-8fd3-564597076853\") " pod="openstack/nova-api-0" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.534067 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af325ac-63cf-45be-8fd3-564597076853-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7af325ac-63cf-45be-8fd3-564597076853\") " pod="openstack/nova-api-0" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.534254 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af325ac-63cf-45be-8fd3-564597076853-config-data\") pod \"nova-api-0\" (UID: \"7af325ac-63cf-45be-8fd3-564597076853\") " pod="openstack/nova-api-0" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.551535 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n22sw\" (UniqueName: \"kubernetes.io/projected/7af325ac-63cf-45be-8fd3-564597076853-kube-api-access-n22sw\") pod \"nova-api-0\" (UID: \"7af325ac-63cf-45be-8fd3-564597076853\") " pod="openstack/nova-api-0" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.624762 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 13:57:46 crc kubenswrapper[4812]: W0216 13:57:46.631997 4812 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf508573d_dccc_4922_9173_48c8c9a8e134.slice/crio-59c607999dc50c099a3dbf9a5d66cbc25ab0dce5456f05f3095b4a270e5c4ad2 WatchSource:0}: Error finding container 59c607999dc50c099a3dbf9a5d66cbc25ab0dce5456f05f3095b4a270e5c4ad2: Status 404 returned error can't find the container with id 59c607999dc50c099a3dbf9a5d66cbc25ab0dce5456f05f3095b4a270e5c4ad2 Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.637875 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.666003 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.845663 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qslb2\" (UniqueName: \"kubernetes.io/projected/ebb386ce-7ac7-465f-952e-ba006a49411d-kube-api-access-qslb2\") pod \"ebb386ce-7ac7-465f-952e-ba006a49411d\" (UID: \"ebb386ce-7ac7-465f-952e-ba006a49411d\") " Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.845898 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb386ce-7ac7-465f-952e-ba006a49411d-config-data\") pod \"ebb386ce-7ac7-465f-952e-ba006a49411d\" (UID: \"ebb386ce-7ac7-465f-952e-ba006a49411d\") " Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.846078 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb386ce-7ac7-465f-952e-ba006a49411d-combined-ca-bundle\") pod \"ebb386ce-7ac7-465f-952e-ba006a49411d\" (UID: \"ebb386ce-7ac7-465f-952e-ba006a49411d\") " Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.864383 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ebb386ce-7ac7-465f-952e-ba006a49411d-kube-api-access-qslb2" (OuterVolumeSpecName: "kube-api-access-qslb2") pod "ebb386ce-7ac7-465f-952e-ba006a49411d" (UID: "ebb386ce-7ac7-465f-952e-ba006a49411d"). InnerVolumeSpecName "kube-api-access-qslb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.923760 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb386ce-7ac7-465f-952e-ba006a49411d-config-data" (OuterVolumeSpecName: "config-data") pod "ebb386ce-7ac7-465f-952e-ba006a49411d" (UID: "ebb386ce-7ac7-465f-952e-ba006a49411d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.940476 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb386ce-7ac7-465f-952e-ba006a49411d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebb386ce-7ac7-465f-952e-ba006a49411d" (UID: "ebb386ce-7ac7-465f-952e-ba006a49411d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.950504 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qslb2\" (UniqueName: \"kubernetes.io/projected/ebb386ce-7ac7-465f-952e-ba006a49411d-kube-api-access-qslb2\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.950550 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb386ce-7ac7-465f-952e-ba006a49411d-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:46 crc kubenswrapper[4812]: I0216 13:57:46.950566 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb386ce-7ac7-465f-952e-ba006a49411d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.144285 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f508573d-dccc-4922-9173-48c8c9a8e134","Type":"ContainerStarted","Data":"59c607999dc50c099a3dbf9a5d66cbc25ab0dce5456f05f3095b4a270e5c4ad2"} Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.149407 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"31ddd42b-256a-4ab3-a348-bfa32b61cd2e","Type":"ContainerStarted","Data":"cb359cf7997511c968aa74acd48214a9c5f25d311a1813526243f56bef93d28f"} Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.149843 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"31ddd42b-256a-4ab3-a348-bfa32b61cd2e","Type":"ContainerStarted","Data":"37f99574f1f753d77b8b29482dc034ca8ead256f62bad8624c40c285b6781aab"} Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.150048 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 16 13:57:47 crc 
kubenswrapper[4812]: I0216 13:57:47.158503 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ebb386ce-7ac7-465f-952e-ba006a49411d","Type":"ContainerDied","Data":"241ed3b6131904c64ab08dcceabf812083d5eb182aba1bb53a9e1bf74276494b"} Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.158586 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.158625 4812 scope.go:117] "RemoveContainer" containerID="85d49ad791cd4ab93dbee97194bd2ff6b594c64715dc8ed1b587507e69ff64e0" Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.162513 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ad25a505-b306-47ee-92dc-19b8635d455b","Type":"ContainerStarted","Data":"5e98e4a545cb2ffe6fcfb23c9b6bb86f3bb9674d1fb7c6bcfe38d80ddb565d57"} Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.162584 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ad25a505-b306-47ee-92dc-19b8635d455b","Type":"ContainerStarted","Data":"745d592566322b077db01ae41e9d70c3784bcc9fe8836eb667517adc56012d7d"} Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.187101 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=4.187062882 podStartE2EDuration="4.187062882s" podCreationTimestamp="2026-02-16 13:57:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:57:47.183298533 +0000 UTC m=+1556.247629254" watchObservedRunningTime="2026-02-16 13:57:47.187062882 +0000 UTC m=+1556.251393593" Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.221374 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" 
podStartSLOduration=3.221341203 podStartE2EDuration="3.221341203s" podCreationTimestamp="2026-02-16 13:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:57:47.219227532 +0000 UTC m=+1556.283558233" watchObservedRunningTime="2026-02-16 13:57:47.221341203 +0000 UTC m=+1556.285671894"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.295280 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.562794 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.600780 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 13:57:47 crc kubenswrapper[4812]: E0216 13:57:47.601595 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb386ce-7ac7-465f-952e-ba006a49411d" containerName="nova-scheduler-scheduler"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.601614 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb386ce-7ac7-465f-952e-ba006a49411d" containerName="nova-scheduler-scheduler"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.601904 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb386ce-7ac7-465f-952e-ba006a49411d" containerName="nova-scheduler-scheduler"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.603141 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.609971 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.615330 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.620177 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-config-data\") pod \"nova-scheduler-0\" (UID: \"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41\") " pod="openstack/nova-scheduler-0"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.620692 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41\") " pod="openstack/nova-scheduler-0"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.620730 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5jqv\" (UniqueName: \"kubernetes.io/projected/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-kube-api-access-b5jqv\") pod \"nova-scheduler-0\" (UID: \"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41\") " pod="openstack/nova-scheduler-0"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.638557 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.723690 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41\") " pod="openstack/nova-scheduler-0"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.723761 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5jqv\" (UniqueName: \"kubernetes.io/projected/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-kube-api-access-b5jqv\") pod \"nova-scheduler-0\" (UID: \"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41\") " pod="openstack/nova-scheduler-0"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.723972 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-config-data\") pod \"nova-scheduler-0\" (UID: \"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41\") " pod="openstack/nova-scheduler-0"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.731355 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-config-data\") pod \"nova-scheduler-0\" (UID: \"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41\") " pod="openstack/nova-scheduler-0"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.732184 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41\") " pod="openstack/nova-scheduler-0"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.749698 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5jqv\" (UniqueName: \"kubernetes.io/projected/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-kube-api-access-b5jqv\") pod \"nova-scheduler-0\" (UID: \"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41\") " pod="openstack/nova-scheduler-0"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.903049 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2869bf1-5702-4053-b414-f1fa8ba4f481" path="/var/lib/kubelet/pods/e2869bf1-5702-4053-b414-f1fa8ba4f481/volumes"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.903812 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebb386ce-7ac7-465f-952e-ba006a49411d" path="/var/lib/kubelet/pods/ebb386ce-7ac7-465f-952e-ba006a49411d/volumes"
Feb 16 13:57:47 crc kubenswrapper[4812]: I0216 13:57:47.951052 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 13:57:48 crc kubenswrapper[4812]: I0216 13:57:48.323640 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:57:48 crc kubenswrapper[4812]: I0216 13:57:48.324577 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerName="ceilometer-central-agent" containerID="cri-o://984afce9e09b6642d6997623b59afefd1e45a0c4b5257dd3859afd38a198b65e" gracePeriod=30
Feb 16 13:57:48 crc kubenswrapper[4812]: I0216 13:57:48.326406 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerName="proxy-httpd" containerID="cri-o://2973c3ca880517f2f97ba2c9aec5581db6a1306d56a6f37e649e47908ca17f46" gracePeriod=30
Feb 16 13:57:48 crc kubenswrapper[4812]: I0216 13:57:48.326538 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerName="sg-core" containerID="cri-o://b73221b6add0c9f2b9a6a2a5d469014eb65efb75559fc8afab9963f99a8f672e" gracePeriod=30
Feb 16 13:57:48 crc kubenswrapper[4812]: I0216 13:57:48.326605 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerName="ceilometer-notification-agent" containerID="cri-o://d98c683017ff6532d04cccdd3384d2febb5ae3f24620c374a858f107cb14ba52" gracePeriod=30
Feb 16 13:57:48 crc kubenswrapper[4812]: I0216 13:57:48.345061 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f508573d-dccc-4922-9173-48c8c9a8e134","Type":"ContainerStarted","Data":"2f9fee57edab89e2cca731450494bf5af862bffae085c7fc22a77ba6e0bb0ad3"}
Feb 16 13:57:48 crc kubenswrapper[4812]: I0216 13:57:48.345793 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 16 13:57:48 crc kubenswrapper[4812]: I0216 13:57:48.348279 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7af325ac-63cf-45be-8fd3-564597076853","Type":"ContainerStarted","Data":"469d9ba20dc1f86770af02f2bc1c8495552dd556a9aff1be43e929870af6cd78"}
Feb 16 13:57:48 crc kubenswrapper[4812]: I0216 13:57:48.348349 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7af325ac-63cf-45be-8fd3-564597076853","Type":"ContainerStarted","Data":"4f10136fe9bcc532aad846a8acbb11f951d91b74ad1ec20fc98881619640f37b"}
Feb 16 13:57:48 crc kubenswrapper[4812]: I0216 13:57:48.394916 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.9047825339999997 podStartE2EDuration="3.394889771s" podCreationTimestamp="2026-02-16 13:57:45 +0000 UTC" firstStartedPulling="2026-02-16 13:57:46.637492825 +0000 UTC m=+1555.701823526" lastFinishedPulling="2026-02-16 13:57:47.127600062 +0000 UTC m=+1556.191930763" observedRunningTime="2026-02-16 13:57:48.370826455 +0000 UTC m=+1557.435157176" watchObservedRunningTime="2026-02-16 13:57:48.394889771 +0000 UTC m=+1557.459220462"
Feb 16 13:57:48 crc kubenswrapper[4812]: W0216 13:57:48.744016 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16fc436e_ef2a_4aa9_ad5f_1da36fb18a41.slice/crio-1c96790a3a73d50b81fffe7e6b4901ec61efb936d49210371cd4435550118979 WatchSource:0}: Error finding container 1c96790a3a73d50b81fffe7e6b4901ec61efb936d49210371cd4435550118979: Status 404 returned error can't find the container with id 1c96790a3a73d50b81fffe7e6b4901ec61efb936d49210371cd4435550118979
Feb 16 13:57:48 crc kubenswrapper[4812]: I0216 13:57:48.748400 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 13:57:49 crc kubenswrapper[4812]: I0216 13:57:49.380759 4812 generic.go:334] "Generic (PLEG): container finished" podID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerID="2973c3ca880517f2f97ba2c9aec5581db6a1306d56a6f37e649e47908ca17f46" exitCode=0
Feb 16 13:57:49 crc kubenswrapper[4812]: I0216 13:57:49.381260 4812 generic.go:334] "Generic (PLEG): container finished" podID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerID="b73221b6add0c9f2b9a6a2a5d469014eb65efb75559fc8afab9963f99a8f672e" exitCode=2
Feb 16 13:57:49 crc kubenswrapper[4812]: I0216 13:57:49.381284 4812 generic.go:334] "Generic (PLEG): container finished" podID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerID="984afce9e09b6642d6997623b59afefd1e45a0c4b5257dd3859afd38a198b65e" exitCode=0
Feb 16 13:57:49 crc kubenswrapper[4812]: I0216 13:57:49.380921 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2b86c67-a2d2-4146-a4f1-46bae3ff6975","Type":"ContainerDied","Data":"2973c3ca880517f2f97ba2c9aec5581db6a1306d56a6f37e649e47908ca17f46"}
Feb 16 13:57:49 crc kubenswrapper[4812]: I0216 13:57:49.381383 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2b86c67-a2d2-4146-a4f1-46bae3ff6975","Type":"ContainerDied","Data":"b73221b6add0c9f2b9a6a2a5d469014eb65efb75559fc8afab9963f99a8f672e"}
Feb 16 13:57:49 crc kubenswrapper[4812]: I0216 13:57:49.381405 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2b86c67-a2d2-4146-a4f1-46bae3ff6975","Type":"ContainerDied","Data":"984afce9e09b6642d6997623b59afefd1e45a0c4b5257dd3859afd38a198b65e"}
Feb 16 13:57:49 crc kubenswrapper[4812]: I0216 13:57:49.387791 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41","Type":"ContainerStarted","Data":"eb54dbfc6f57d2bf16293e83c97b308738834698c42fd8028cbc20cb07c6bd40"}
Feb 16 13:57:49 crc kubenswrapper[4812]: I0216 13:57:49.387851 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41","Type":"ContainerStarted","Data":"1c96790a3a73d50b81fffe7e6b4901ec61efb936d49210371cd4435550118979"}
Feb 16 13:57:49 crc kubenswrapper[4812]: I0216 13:57:49.393248 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7af325ac-63cf-45be-8fd3-564597076853","Type":"ContainerStarted","Data":"a61ef979f04212710d67616625e45a5ebdca870badc18af177201a8cd692200f"}
Feb 16 13:57:49 crc kubenswrapper[4812]: I0216 13:57:49.423763 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.423715742 podStartE2EDuration="2.423715742s" podCreationTimestamp="2026-02-16 13:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:57:49.407690499 +0000 UTC m=+1558.472021220" watchObservedRunningTime="2026-02-16 13:57:49.423715742 +0000 UTC m=+1558.488046443"
Feb 16 13:57:49 crc kubenswrapper[4812]: I0216 13:57:49.444103 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.444056221 podStartE2EDuration="3.444056221s" podCreationTimestamp="2026-02-16 13:57:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:57:49.435931806 +0000 UTC m=+1558.500262507" watchObservedRunningTime="2026-02-16 13:57:49.444056221 +0000 UTC m=+1558.508386922"
Feb 16 13:57:49 crc kubenswrapper[4812]: I0216 13:57:49.742844 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 13:57:49 crc kubenswrapper[4812]: I0216 13:57:49.744871 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 13:57:50 crc kubenswrapper[4812]: E0216 13:57:50.881071 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735"
Feb 16 13:57:52 crc kubenswrapper[4812]: I0216 13:57:52.951790 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 16 13:57:54 crc kubenswrapper[4812]: I0216 13:57:54.529846 4812 generic.go:334] "Generic (PLEG): container finished" podID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerID="d98c683017ff6532d04cccdd3384d2febb5ae3f24620c374a858f107cb14ba52" exitCode=0
Feb 16 13:57:54 crc kubenswrapper[4812]: I0216 13:57:54.529972 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2b86c67-a2d2-4146-a4f1-46bae3ff6975","Type":"ContainerDied","Data":"d98c683017ff6532d04cccdd3384d2febb5ae3f24620c374a858f107cb14ba52"}
Feb 16 13:57:54 crc kubenswrapper[4812]: I0216 13:57:54.744135 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 16 13:57:54 crc kubenswrapper[4812]: I0216 13:57:54.744211 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 16 13:57:54 crc kubenswrapper[4812]: I0216 13:57:54.845723 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.042234 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.216512 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-run-httpd\") pod \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") "
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.216615 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-sg-core-conf-yaml\") pod \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") "
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.216659 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-combined-ca-bundle\") pod \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") "
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.216772 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-scripts\") pod \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") "
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.216891 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrlhg\" (UniqueName: \"kubernetes.io/projected/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-kube-api-access-nrlhg\") pod \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") "
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.216942 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-log-httpd\") pod \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") "
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.217053 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-config-data\") pod \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\" (UID: \"c2b86c67-a2d2-4146-a4f1-46bae3ff6975\") "
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.217196 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c2b86c67-a2d2-4146-a4f1-46bae3ff6975" (UID: "c2b86c67-a2d2-4146-a4f1-46bae3ff6975"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.217837 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.218399 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c2b86c67-a2d2-4146-a4f1-46bae3ff6975" (UID: "c2b86c67-a2d2-4146-a4f1-46bae3ff6975"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.226302 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-kube-api-access-nrlhg" (OuterVolumeSpecName: "kube-api-access-nrlhg") pod "c2b86c67-a2d2-4146-a4f1-46bae3ff6975" (UID: "c2b86c67-a2d2-4146-a4f1-46bae3ff6975"). InnerVolumeSpecName "kube-api-access-nrlhg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.227687 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-scripts" (OuterVolumeSpecName: "scripts") pod "c2b86c67-a2d2-4146-a4f1-46bae3ff6975" (UID: "c2b86c67-a2d2-4146-a4f1-46bae3ff6975"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.255322 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c2b86c67-a2d2-4146-a4f1-46bae3ff6975" (UID: "c2b86c67-a2d2-4146-a4f1-46bae3ff6975"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.321795 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.321845 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.321858 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrlhg\" (UniqueName: \"kubernetes.io/projected/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-kube-api-access-nrlhg\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.321877 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.349884 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2b86c67-a2d2-4146-a4f1-46bae3ff6975" (UID: "c2b86c67-a2d2-4146-a4f1-46bae3ff6975"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.366543 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-config-data" (OuterVolumeSpecName: "config-data") pod "c2b86c67-a2d2-4146-a4f1-46bae3ff6975" (UID: "c2b86c67-a2d2-4146-a4f1-46bae3ff6975"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.424620 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.424663 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b86c67-a2d2-4146-a4f1-46bae3ff6975-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.689528 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2b86c67-a2d2-4146-a4f1-46bae3ff6975","Type":"ContainerDied","Data":"8d97c67a49bb088a1118b897b166406892a5a368bbf49558de951f4c7f4b5e06"}
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.690091 4812 scope.go:117] "RemoveContainer" containerID="2973c3ca880517f2f97ba2c9aec5581db6a1306d56a6f37e649e47908ca17f46"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.690228 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.762417 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.773949 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.818501 4812 scope.go:117] "RemoveContainer" containerID="b73221b6add0c9f2b9a6a2a5d469014eb65efb75559fc8afab9963f99a8f672e"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.828853 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ad25a505-b306-47ee-92dc-19b8635d455b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.828853 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ad25a505-b306-47ee-92dc-19b8635d455b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.828956 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:57:55 crc kubenswrapper[4812]: E0216 13:57:55.830745 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerName="proxy-httpd"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.830803 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerName="proxy-httpd"
Feb 16 13:57:55 crc kubenswrapper[4812]: E0216 13:57:55.830828 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerName="ceilometer-central-agent"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.830840 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerName="ceilometer-central-agent"
Feb 16 13:57:55 crc kubenswrapper[4812]: E0216 13:57:55.830862 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerName="sg-core"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.830872 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerName="sg-core"
Feb 16 13:57:55 crc kubenswrapper[4812]: E0216 13:57:55.830976 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerName="ceilometer-notification-agent"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.830985 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerName="ceilometer-notification-agent"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.831298 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerName="ceilometer-central-agent"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.831330 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerName="proxy-httpd"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.831355 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerName="ceilometer-notification-agent"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.831370 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" containerName="sg-core"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.852249 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.853793 4812 scope.go:117] "RemoveContainer" containerID="d98c683017ff6532d04cccdd3384d2febb5ae3f24620c374a858f107cb14ba52"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.858879 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.859127 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.859217 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.871546 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecb30c4d-e441-43a1-927b-02cac798ff1e-log-httpd\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.872012 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.872203 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.872418 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.872683 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-config-data\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.872802 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecb30c4d-e441-43a1-927b-02cac798ff1e-run-httpd\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.872993 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv85p\" (UniqueName: \"kubernetes.io/projected/ecb30c4d-e441-43a1-927b-02cac798ff1e-kube-api-access-bv85p\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.873126 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-scripts\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.901889 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2b86c67-a2d2-4146-a4f1-46bae3ff6975" path="/var/lib/kubelet/pods/c2b86c67-a2d2-4146-a4f1-46bae3ff6975/volumes"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.903121 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.903795 4812 scope.go:117] "RemoveContainer" containerID="984afce9e09b6642d6997623b59afefd1e45a0c4b5257dd3859afd38a198b65e"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.976304 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecb30c4d-e441-43a1-927b-02cac798ff1e-run-httpd\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.976512 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv85p\" (UniqueName: \"kubernetes.io/projected/ecb30c4d-e441-43a1-927b-02cac798ff1e-kube-api-access-bv85p\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.976607 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-scripts\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.976751 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecb30c4d-e441-43a1-927b-02cac798ff1e-log-httpd\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.976808 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.976876 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.976946 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.977063 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-config-data\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.977335 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecb30c4d-e441-43a1-927b-02cac798ff1e-run-httpd\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.977524 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecb30c4d-e441-43a1-927b-02cac798ff1e-log-httpd\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.982997 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.986937 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-config-data\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.986956 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-scripts\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.988849 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:55 crc kubenswrapper[4812]: I0216 13:57:55.989747 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:56 crc kubenswrapper[4812]: I0216 13:57:55.999999 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv85p\" (UniqueName: \"kubernetes.io/projected/ecb30c4d-e441-43a1-927b-02cac798ff1e-kube-api-access-bv85p\") pod \"ceilometer-0\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " pod="openstack/ceilometer-0"
Feb 16 13:57:56 crc kubenswrapper[4812]: I0216 13:57:56.039139 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 16 13:57:56 crc kubenswrapper[4812]: I0216 13:57:56.198646 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:57:56 crc kubenswrapper[4812]: I0216 13:57:56.667690 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 13:57:56 crc kubenswrapper[4812]: I0216 13:57:56.668205 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 13:57:56 crc kubenswrapper[4812]: I0216 13:57:56.768970 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:57:57 crc kubenswrapper[4812]: I0216 13:57:57.743226 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecb30c4d-e441-43a1-927b-02cac798ff1e","Type":"ContainerStarted","Data":"41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f"}
Feb 16 13:57:57 crc kubenswrapper[4812]: I0216 13:57:57.744252 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecb30c4d-e441-43a1-927b-02cac798ff1e","Type":"ContainerStarted","Data":"42fe564406618ae984e62c900dad951027e7915519fc7e4bb5d141d28d75effb"}
Feb 16 13:57:57 crc kubenswrapper[4812]: I0216 13:57:57.754631 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7af325ac-63cf-45be-8fd3-564597076853" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.219:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 13:57:57 crc kubenswrapper[4812]: I0216 13:57:57.754654 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0"
podUID="7af325ac-63cf-45be-8fd3-564597076853" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.219:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 13:57:57 crc kubenswrapper[4812]: I0216 13:57:57.951500 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 13:57:57 crc kubenswrapper[4812]: I0216 13:57:57.991201 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 13:57:58 crc kubenswrapper[4812]: I0216 13:57:58.774114 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecb30c4d-e441-43a1-927b-02cac798ff1e","Type":"ContainerStarted","Data":"0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c"} Feb 16 13:57:58 crc kubenswrapper[4812]: I0216 13:57:58.826684 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 13:57:59 crc kubenswrapper[4812]: I0216 13:57:59.796356 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecb30c4d-e441-43a1-927b-02cac798ff1e","Type":"ContainerStarted","Data":"cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5"} Feb 16 13:58:01 crc kubenswrapper[4812]: I0216 13:58:01.830628 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecb30c4d-e441-43a1-927b-02cac798ff1e","Type":"ContainerStarted","Data":"7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30"} Feb 16 13:58:01 crc kubenswrapper[4812]: I0216 13:58:01.831421 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:58:01 crc kubenswrapper[4812]: I0216 13:58:01.990311 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7607693319999997 
podStartE2EDuration="6.990278319s" podCreationTimestamp="2026-02-16 13:57:55 +0000 UTC" firstStartedPulling="2026-02-16 13:57:56.784750905 +0000 UTC m=+1565.849081606" lastFinishedPulling="2026-02-16 13:58:01.014259892 +0000 UTC m=+1570.078590593" observedRunningTime="2026-02-16 13:58:01.865190476 +0000 UTC m=+1570.929521177" watchObservedRunningTime="2026-02-16 13:58:01.990278319 +0000 UTC m=+1571.054609010" Feb 16 13:58:02 crc kubenswrapper[4812]: I0216 13:58:02.872939 4812 generic.go:334] "Generic (PLEG): container finished" podID="7759cc43-520e-4eb8-8911-fb01c660247c" containerID="95731e41b11084dd285640b77c03e99b945351016aa9c1fd9bc6094a0e1efae9" exitCode=137 Feb 16 13:58:02 crc kubenswrapper[4812]: I0216 13:58:02.873123 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7759cc43-520e-4eb8-8911-fb01c660247c","Type":"ContainerDied","Data":"95731e41b11084dd285640b77c03e99b945351016aa9c1fd9bc6094a0e1efae9"} Feb 16 13:58:02 crc kubenswrapper[4812]: E0216 13:58:02.883791 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7759cc43_520e_4eb8_8911_fb01c660247c.slice/crio-conmon-95731e41b11084dd285640b77c03e99b945351016aa9c1fd9bc6094a0e1efae9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7759cc43_520e_4eb8_8911_fb01c660247c.slice/crio-95731e41b11084dd285640b77c03e99b945351016aa9c1fd9bc6094a0e1efae9.scope\": RecentStats: unable to find data in memory cache]" Feb 16 13:58:03 crc kubenswrapper[4812]: I0216 13:58:03.142085 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:03 crc kubenswrapper[4812]: I0216 13:58:03.314082 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cswl8\" (UniqueName: \"kubernetes.io/projected/7759cc43-520e-4eb8-8911-fb01c660247c-kube-api-access-cswl8\") pod \"7759cc43-520e-4eb8-8911-fb01c660247c\" (UID: \"7759cc43-520e-4eb8-8911-fb01c660247c\") " Feb 16 13:58:03 crc kubenswrapper[4812]: I0216 13:58:03.314242 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7759cc43-520e-4eb8-8911-fb01c660247c-combined-ca-bundle\") pod \"7759cc43-520e-4eb8-8911-fb01c660247c\" (UID: \"7759cc43-520e-4eb8-8911-fb01c660247c\") " Feb 16 13:58:03 crc kubenswrapper[4812]: I0216 13:58:03.314537 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7759cc43-520e-4eb8-8911-fb01c660247c-config-data\") pod \"7759cc43-520e-4eb8-8911-fb01c660247c\" (UID: \"7759cc43-520e-4eb8-8911-fb01c660247c\") " Feb 16 13:58:03 crc kubenswrapper[4812]: I0216 13:58:03.325287 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7759cc43-520e-4eb8-8911-fb01c660247c-kube-api-access-cswl8" (OuterVolumeSpecName: "kube-api-access-cswl8") pod "7759cc43-520e-4eb8-8911-fb01c660247c" (UID: "7759cc43-520e-4eb8-8911-fb01c660247c"). InnerVolumeSpecName "kube-api-access-cswl8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:58:03 crc kubenswrapper[4812]: I0216 13:58:03.522870 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cswl8\" (UniqueName: \"kubernetes.io/projected/7759cc43-520e-4eb8-8911-fb01c660247c-kube-api-access-cswl8\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:03 crc kubenswrapper[4812]: I0216 13:58:03.530416 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7759cc43-520e-4eb8-8911-fb01c660247c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7759cc43-520e-4eb8-8911-fb01c660247c" (UID: "7759cc43-520e-4eb8-8911-fb01c660247c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:03 crc kubenswrapper[4812]: I0216 13:58:03.533006 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7759cc43-520e-4eb8-8911-fb01c660247c-config-data" (OuterVolumeSpecName: "config-data") pod "7759cc43-520e-4eb8-8911-fb01c660247c" (UID: "7759cc43-520e-4eb8-8911-fb01c660247c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:03 crc kubenswrapper[4812]: I0216 13:58:03.625053 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7759cc43-520e-4eb8-8911-fb01c660247c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:03 crc kubenswrapper[4812]: I0216 13:58:03.625099 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7759cc43-520e-4eb8-8911-fb01c660247c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:03 crc kubenswrapper[4812]: I0216 13:58:03.895256 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:03 crc kubenswrapper[4812]: I0216 13:58:03.917694 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7759cc43-520e-4eb8-8911-fb01c660247c","Type":"ContainerDied","Data":"f4780dcbe4289ee1314ee07975f745079317db27842c444acdd13dfd05cb9ce9"} Feb 16 13:58:03 crc kubenswrapper[4812]: I0216 13:58:03.917844 4812 scope.go:117] "RemoveContainer" containerID="95731e41b11084dd285640b77c03e99b945351016aa9c1fd9bc6094a0e1efae9" Feb 16 13:58:03 crc kubenswrapper[4812]: I0216 13:58:03.961993 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:58:03 crc kubenswrapper[4812]: I0216 13:58:03.998414 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.011922 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:58:04 crc kubenswrapper[4812]: E0216 13:58:04.012645 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7759cc43-520e-4eb8-8911-fb01c660247c" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.012676 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7759cc43-520e-4eb8-8911-fb01c660247c" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.012971 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7759cc43-520e-4eb8-8911-fb01c660247c" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.014108 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.137800 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.138195 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.138578 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.152789 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27fec58-8fdf-4df4-890a-ebec94ae3904-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e27fec58-8fdf-4df4-890a-ebec94ae3904\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.153272 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-794pt\" (UniqueName: \"kubernetes.io/projected/e27fec58-8fdf-4df4-890a-ebec94ae3904-kube-api-access-794pt\") pod \"nova-cell1-novncproxy-0\" (UID: \"e27fec58-8fdf-4df4-890a-ebec94ae3904\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.153541 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27fec58-8fdf-4df4-890a-ebec94ae3904-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e27fec58-8fdf-4df4-890a-ebec94ae3904\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.153736 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27fec58-8fdf-4df4-890a-ebec94ae3904-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e27fec58-8fdf-4df4-890a-ebec94ae3904\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.153852 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27fec58-8fdf-4df4-890a-ebec94ae3904-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e27fec58-8fdf-4df4-890a-ebec94ae3904\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.209879 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.263841 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27fec58-8fdf-4df4-890a-ebec94ae3904-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e27fec58-8fdf-4df4-890a-ebec94ae3904\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.264190 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-794pt\" (UniqueName: \"kubernetes.io/projected/e27fec58-8fdf-4df4-890a-ebec94ae3904-kube-api-access-794pt\") pod \"nova-cell1-novncproxy-0\" (UID: \"e27fec58-8fdf-4df4-890a-ebec94ae3904\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.264411 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27fec58-8fdf-4df4-890a-ebec94ae3904-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e27fec58-8fdf-4df4-890a-ebec94ae3904\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc 
kubenswrapper[4812]: I0216 13:58:04.264506 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27fec58-8fdf-4df4-890a-ebec94ae3904-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e27fec58-8fdf-4df4-890a-ebec94ae3904\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.264539 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27fec58-8fdf-4df4-890a-ebec94ae3904-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e27fec58-8fdf-4df4-890a-ebec94ae3904\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.269615 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27fec58-8fdf-4df4-890a-ebec94ae3904-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e27fec58-8fdf-4df4-890a-ebec94ae3904\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.283199 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27fec58-8fdf-4df4-890a-ebec94ae3904-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e27fec58-8fdf-4df4-890a-ebec94ae3904\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.283579 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e27fec58-8fdf-4df4-890a-ebec94ae3904-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e27fec58-8fdf-4df4-890a-ebec94ae3904\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.284612 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27fec58-8fdf-4df4-890a-ebec94ae3904-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e27fec58-8fdf-4df4-890a-ebec94ae3904\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.289956 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-794pt\" (UniqueName: \"kubernetes.io/projected/e27fec58-8fdf-4df4-890a-ebec94ae3904-kube-api-access-794pt\") pod \"nova-cell1-novncproxy-0\" (UID: \"e27fec58-8fdf-4df4-890a-ebec94ae3904\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.470263 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.758187 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.758915 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.768764 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 13:58:04 crc kubenswrapper[4812]: I0216 13:58:04.772664 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 13:58:05 crc kubenswrapper[4812]: I0216 13:58:05.178878 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 13:58:05 crc kubenswrapper[4812]: E0216 13:58:05.882063 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:58:05 crc kubenswrapper[4812]: I0216 13:58:05.901164 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7759cc43-520e-4eb8-8911-fb01c660247c" path="/var/lib/kubelet/pods/7759cc43-520e-4eb8-8911-fb01c660247c/volumes" Feb 16 13:58:05 crc kubenswrapper[4812]: I0216 13:58:05.998074 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e27fec58-8fdf-4df4-890a-ebec94ae3904","Type":"ContainerStarted","Data":"676c60ed34da788a25016d201317b09e1423b16341567890ffcbd1351574e884"} Feb 16 13:58:05 crc kubenswrapper[4812]: I0216 13:58:05.998187 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e27fec58-8fdf-4df4-890a-ebec94ae3904","Type":"ContainerStarted","Data":"9c23da12916a98fcd375bd3ce9b5c429d5b98a69e228a48ce9c586ef8e28a9a0"} Feb 16 13:58:06 crc kubenswrapper[4812]: I0216 13:58:06.672474 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 13:58:06 crc kubenswrapper[4812]: I0216 13:58:06.673376 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 13:58:06 crc kubenswrapper[4812]: I0216 13:58:06.676495 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 13:58:06 crc kubenswrapper[4812]: I0216 13:58:06.684617 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 13:58:06 crc kubenswrapper[4812]: I0216 13:58:06.711243 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.711199797 podStartE2EDuration="3.711199797s" podCreationTimestamp="2026-02-16 13:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-16 13:58:06.033007629 +0000 UTC m=+1575.097338350" watchObservedRunningTime="2026-02-16 13:58:06.711199797 +0000 UTC m=+1575.775530498" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.010106 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.015312 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.294814 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-6sbj2"] Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.297939 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.318537 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-6sbj2"] Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.455828 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47232a67-6356-4806-83a7-74719fb464fc-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.455961 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47232a67-6356-4806-83a7-74719fb464fc-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.456026 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/47232a67-6356-4806-83a7-74719fb464fc-config\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.456080 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47232a67-6356-4806-83a7-74719fb464fc-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.456135 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47232a67-6356-4806-83a7-74719fb464fc-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.456181 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv955\" (UniqueName: \"kubernetes.io/projected/47232a67-6356-4806-83a7-74719fb464fc-kube-api-access-fv955\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.560793 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47232a67-6356-4806-83a7-74719fb464fc-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.560937 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/47232a67-6356-4806-83a7-74719fb464fc-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.561036 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47232a67-6356-4806-83a7-74719fb464fc-config\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.561104 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47232a67-6356-4806-83a7-74719fb464fc-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.561166 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47232a67-6356-4806-83a7-74719fb464fc-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.561221 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv955\" (UniqueName: \"kubernetes.io/projected/47232a67-6356-4806-83a7-74719fb464fc-kube-api-access-fv955\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.563129 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/47232a67-6356-4806-83a7-74719fb464fc-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.563160 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47232a67-6356-4806-83a7-74719fb464fc-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.563415 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47232a67-6356-4806-83a7-74719fb464fc-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.563545 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47232a67-6356-4806-83a7-74719fb464fc-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.563962 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47232a67-6356-4806-83a7-74719fb464fc-config\") pod \"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.601913 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv955\" (UniqueName: \"kubernetes.io/projected/47232a67-6356-4806-83a7-74719fb464fc-kube-api-access-fv955\") pod 
\"dnsmasq-dns-89c5cd4d5-6sbj2\" (UID: \"47232a67-6356-4806-83a7-74719fb464fc\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:07 crc kubenswrapper[4812]: I0216 13:58:07.642157 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:08 crc kubenswrapper[4812]: I0216 13:58:08.521152 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-6sbj2"] Feb 16 13:58:09 crc kubenswrapper[4812]: I0216 13:58:09.085907 4812 generic.go:334] "Generic (PLEG): container finished" podID="47232a67-6356-4806-83a7-74719fb464fc" containerID="aaabcd28aa1c238c77b0f83559eae6488513a9b856680c368b5203f3d37bd2c8" exitCode=0 Feb 16 13:58:09 crc kubenswrapper[4812]: I0216 13:58:09.088040 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" event={"ID":"47232a67-6356-4806-83a7-74719fb464fc","Type":"ContainerDied","Data":"aaabcd28aa1c238c77b0f83559eae6488513a9b856680c368b5203f3d37bd2c8"} Feb 16 13:58:09 crc kubenswrapper[4812]: I0216 13:58:09.088216 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" event={"ID":"47232a67-6356-4806-83a7-74719fb464fc","Type":"ContainerStarted","Data":"cd6e8a3c711ee1ad39f5b9f4f05cdd759ff965dc85c7a80a3e3197c36ed3346a"} Feb 16 13:58:09 crc kubenswrapper[4812]: I0216 13:58:09.471184 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 13:58:10 crc kubenswrapper[4812]: I0216 13:58:10.105123 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" event={"ID":"47232a67-6356-4806-83a7-74719fb464fc","Type":"ContainerStarted","Data":"d53d6c28618f372dd9afed4b98b495c2b61809b6d3ea27760307292781865b63"} Feb 16 13:58:10 crc kubenswrapper[4812]: I0216 13:58:10.107152 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" Feb 16 13:58:10 crc kubenswrapper[4812]: I0216 13:58:10.137067 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2" podStartSLOduration=3.137037196 podStartE2EDuration="3.137037196s" podCreationTimestamp="2026-02-16 13:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:58:10.133954007 +0000 UTC m=+1579.198284708" watchObservedRunningTime="2026-02-16 13:58:10.137037196 +0000 UTC m=+1579.201367897" Feb 16 13:58:10 crc kubenswrapper[4812]: I0216 13:58:10.313911 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:58:10 crc kubenswrapper[4812]: I0216 13:58:10.314240 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7af325ac-63cf-45be-8fd3-564597076853" containerName="nova-api-log" containerID="cri-o://469d9ba20dc1f86770af02f2bc1c8495552dd556a9aff1be43e929870af6cd78" gracePeriod=30 Feb 16 13:58:10 crc kubenswrapper[4812]: I0216 13:58:10.314500 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7af325ac-63cf-45be-8fd3-564597076853" containerName="nova-api-api" containerID="cri-o://a61ef979f04212710d67616625e45a5ebdca870badc18af177201a8cd692200f" gracePeriod=30 Feb 16 13:58:10 crc kubenswrapper[4812]: I0216 13:58:10.495889 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:58:10 crc kubenswrapper[4812]: I0216 13:58:10.496557 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerName="sg-core" containerID="cri-o://cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5" gracePeriod=30 Feb 16 13:58:10 crc kubenswrapper[4812]: I0216 13:58:10.496773 4812 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerName="proxy-httpd" containerID="cri-o://7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30" gracePeriod=30 Feb 16 13:58:10 crc kubenswrapper[4812]: I0216 13:58:10.496583 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerName="ceilometer-notification-agent" containerID="cri-o://0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c" gracePeriod=30 Feb 16 13:58:10 crc kubenswrapper[4812]: I0216 13:58:10.496332 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerName="ceilometer-central-agent" containerID="cri-o://41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f" gracePeriod=30 Feb 16 13:58:11 crc kubenswrapper[4812]: I0216 13:58:11.128435 4812 generic.go:334] "Generic (PLEG): container finished" podID="7af325ac-63cf-45be-8fd3-564597076853" containerID="469d9ba20dc1f86770af02f2bc1c8495552dd556a9aff1be43e929870af6cd78" exitCode=143 Feb 16 13:58:11 crc kubenswrapper[4812]: I0216 13:58:11.128507 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7af325ac-63cf-45be-8fd3-564597076853","Type":"ContainerDied","Data":"469d9ba20dc1f86770af02f2bc1c8495552dd556a9aff1be43e929870af6cd78"} Feb 16 13:58:11 crc kubenswrapper[4812]: I0216 13:58:11.134394 4812 generic.go:334] "Generic (PLEG): container finished" podID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerID="7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30" exitCode=0 Feb 16 13:58:11 crc kubenswrapper[4812]: I0216 13:58:11.134629 4812 generic.go:334] "Generic (PLEG): container finished" podID="ecb30c4d-e441-43a1-927b-02cac798ff1e" 
containerID="cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5" exitCode=2 Feb 16 13:58:11 crc kubenswrapper[4812]: I0216 13:58:11.134496 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecb30c4d-e441-43a1-927b-02cac798ff1e","Type":"ContainerDied","Data":"7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30"} Feb 16 13:58:11 crc kubenswrapper[4812]: I0216 13:58:11.135273 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecb30c4d-e441-43a1-927b-02cac798ff1e","Type":"ContainerDied","Data":"cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5"} Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.117418 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.152322 4812 generic.go:334] "Generic (PLEG): container finished" podID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerID="0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c" exitCode=0 Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.152379 4812 generic.go:334] "Generic (PLEG): container finished" podID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerID="41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f" exitCode=0 Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.153234 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecb30c4d-e441-43a1-927b-02cac798ff1e","Type":"ContainerDied","Data":"0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c"} Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.153333 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecb30c4d-e441-43a1-927b-02cac798ff1e","Type":"ContainerDied","Data":"41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f"} Feb 16 13:58:12 crc 
kubenswrapper[4812]: I0216 13:58:12.153345 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecb30c4d-e441-43a1-927b-02cac798ff1e","Type":"ContainerDied","Data":"42fe564406618ae984e62c900dad951027e7915519fc7e4bb5d141d28d75effb"} Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.153371 4812 scope.go:117] "RemoveContainer" containerID="7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.154303 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.195233 4812 scope.go:117] "RemoveContainer" containerID="cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.249196 4812 scope.go:117] "RemoveContainer" containerID="0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.277962 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-sg-core-conf-yaml\") pod \"ecb30c4d-e441-43a1-927b-02cac798ff1e\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.280003 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv85p\" (UniqueName: \"kubernetes.io/projected/ecb30c4d-e441-43a1-927b-02cac798ff1e-kube-api-access-bv85p\") pod \"ecb30c4d-e441-43a1-927b-02cac798ff1e\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.280082 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecb30c4d-e441-43a1-927b-02cac798ff1e-run-httpd\") pod 
\"ecb30c4d-e441-43a1-927b-02cac798ff1e\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.280212 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-scripts\") pod \"ecb30c4d-e441-43a1-927b-02cac798ff1e\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.280267 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-config-data\") pod \"ecb30c4d-e441-43a1-927b-02cac798ff1e\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.280306 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecb30c4d-e441-43a1-927b-02cac798ff1e-log-httpd\") pod \"ecb30c4d-e441-43a1-927b-02cac798ff1e\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.280609 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-ceilometer-tls-certs\") pod \"ecb30c4d-e441-43a1-927b-02cac798ff1e\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.280701 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-combined-ca-bundle\") pod \"ecb30c4d-e441-43a1-927b-02cac798ff1e\" (UID: \"ecb30c4d-e441-43a1-927b-02cac798ff1e\") " Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.284598 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ecb30c4d-e441-43a1-927b-02cac798ff1e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ecb30c4d-e441-43a1-927b-02cac798ff1e" (UID: "ecb30c4d-e441-43a1-927b-02cac798ff1e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.286353 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecb30c4d-e441-43a1-927b-02cac798ff1e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ecb30c4d-e441-43a1-927b-02cac798ff1e" (UID: "ecb30c4d-e441-43a1-927b-02cac798ff1e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.296197 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecb30c4d-e441-43a1-927b-02cac798ff1e-kube-api-access-bv85p" (OuterVolumeSpecName: "kube-api-access-bv85p") pod "ecb30c4d-e441-43a1-927b-02cac798ff1e" (UID: "ecb30c4d-e441-43a1-927b-02cac798ff1e"). InnerVolumeSpecName "kube-api-access-bv85p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.309812 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-scripts" (OuterVolumeSpecName: "scripts") pod "ecb30c4d-e441-43a1-927b-02cac798ff1e" (UID: "ecb30c4d-e441-43a1-927b-02cac798ff1e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.312519 4812 scope.go:117] "RemoveContainer" containerID="41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.349356 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ecb30c4d-e441-43a1-927b-02cac798ff1e" (UID: "ecb30c4d-e441-43a1-927b-02cac798ff1e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.392923 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ecb30c4d-e441-43a1-927b-02cac798ff1e" (UID: "ecb30c4d-e441-43a1-927b-02cac798ff1e"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.393102 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.393147 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv85p\" (UniqueName: \"kubernetes.io/projected/ecb30c4d-e441-43a1-927b-02cac798ff1e-kube-api-access-bv85p\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.393162 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecb30c4d-e441-43a1-927b-02cac798ff1e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.393179 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.393189 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecb30c4d-e441-43a1-927b-02cac798ff1e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.454389 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecb30c4d-e441-43a1-927b-02cac798ff1e" (UID: "ecb30c4d-e441-43a1-927b-02cac798ff1e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.484306 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-config-data" (OuterVolumeSpecName: "config-data") pod "ecb30c4d-e441-43a1-927b-02cac798ff1e" (UID: "ecb30c4d-e441-43a1-927b-02cac798ff1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.496317 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.496387 4812 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.496409 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb30c4d-e441-43a1-927b-02cac798ff1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.504488 4812 scope.go:117] "RemoveContainer" containerID="7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30" Feb 16 13:58:12 crc kubenswrapper[4812]: E0216 13:58:12.505412 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30\": container with ID starting with 7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30 not found: ID does not exist" containerID="7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.505516 
4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30"} err="failed to get container status \"7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30\": rpc error: code = NotFound desc = could not find container \"7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30\": container with ID starting with 7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30 not found: ID does not exist" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.505576 4812 scope.go:117] "RemoveContainer" containerID="cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5" Feb 16 13:58:12 crc kubenswrapper[4812]: E0216 13:58:12.506363 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5\": container with ID starting with cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5 not found: ID does not exist" containerID="cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.506424 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5"} err="failed to get container status \"cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5\": rpc error: code = NotFound desc = could not find container \"cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5\": container with ID starting with cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5 not found: ID does not exist" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.506466 4812 scope.go:117] "RemoveContainer" containerID="0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c" Feb 16 13:58:12 crc kubenswrapper[4812]: E0216 
13:58:12.507275 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c\": container with ID starting with 0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c not found: ID does not exist" containerID="0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.507346 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c"} err="failed to get container status \"0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c\": rpc error: code = NotFound desc = could not find container \"0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c\": container with ID starting with 0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c not found: ID does not exist" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.507390 4812 scope.go:117] "RemoveContainer" containerID="41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f" Feb 16 13:58:12 crc kubenswrapper[4812]: E0216 13:58:12.508194 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f\": container with ID starting with 41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f not found: ID does not exist" containerID="41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.508258 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f"} err="failed to get container status \"41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f\": rpc 
error: code = NotFound desc = could not find container \"41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f\": container with ID starting with 41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f not found: ID does not exist" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.508303 4812 scope.go:117] "RemoveContainer" containerID="7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.508789 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30"} err="failed to get container status \"7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30\": rpc error: code = NotFound desc = could not find container \"7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30\": container with ID starting with 7ed8f4b98868bb9d2fd1bf6750ce2628782f78bb4f462c4c1fba86bcbde3da30 not found: ID does not exist" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.508823 4812 scope.go:117] "RemoveContainer" containerID="cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.509222 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5"} err="failed to get container status \"cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5\": rpc error: code = NotFound desc = could not find container \"cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5\": container with ID starting with cf213b63078bcce8a99f63e0a093c34e70158cd718c4fec6443287a567bfb7b5 not found: ID does not exist" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.509258 4812 scope.go:117] "RemoveContainer" containerID="0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c" Feb 16 13:58:12 crc 
kubenswrapper[4812]: I0216 13:58:12.509830 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c"} err="failed to get container status \"0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c\": rpc error: code = NotFound desc = could not find container \"0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c\": container with ID starting with 0dad5788b2a9470177df7ea74340e8bde4257ddd6ae199f4731757fa459e7a2c not found: ID does not exist" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.509872 4812 scope.go:117] "RemoveContainer" containerID="41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.510365 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f"} err="failed to get container status \"41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f\": rpc error: code = NotFound desc = could not find container \"41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f\": container with ID starting with 41fee1f53c185589a9144a499b0e6b7be154d2adde33db6c95554e53f86a495f not found: ID does not exist" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.801951 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.816863 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.835663 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:58:12 crc kubenswrapper[4812]: E0216 13:58:12.836343 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerName="sg-core" Feb 16 13:58:12 
crc kubenswrapper[4812]: I0216 13:58:12.836366 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerName="sg-core" Feb 16 13:58:12 crc kubenswrapper[4812]: E0216 13:58:12.836410 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerName="ceilometer-central-agent" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.836418 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerName="ceilometer-central-agent" Feb 16 13:58:12 crc kubenswrapper[4812]: E0216 13:58:12.836459 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerName="proxy-httpd" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.836466 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerName="proxy-httpd" Feb 16 13:58:12 crc kubenswrapper[4812]: E0216 13:58:12.836474 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerName="ceilometer-notification-agent" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.836481 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerName="ceilometer-notification-agent" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.836736 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerName="ceilometer-notification-agent" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.836750 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerName="ceilometer-central-agent" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.836774 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" 
containerName="proxy-httpd" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.836786 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" containerName="sg-core" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.839267 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.842270 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.844131 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.844321 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.858976 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.907776 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.907890 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/615b900f-05af-44e8-90c2-c9617ee55578-run-httpd\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0" Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.908301 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.908541 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-scripts\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.909067 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.912303 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/615b900f-05af-44e8-90c2-c9617ee55578-log-httpd\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.912631 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djxtm\" (UniqueName: \"kubernetes.io/projected/615b900f-05af-44e8-90c2-c9617ee55578-kube-api-access-djxtm\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:12 crc kubenswrapper[4812]: I0216 13:58:12.912768 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-config-data\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.015975 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.016654 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-scripts\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.016919 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.017099 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/615b900f-05af-44e8-90c2-c9617ee55578-log-httpd\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.017230 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxtm\" (UniqueName: \"kubernetes.io/projected/615b900f-05af-44e8-90c2-c9617ee55578-kube-api-access-djxtm\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.017336 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-config-data\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.017503 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.017638 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/615b900f-05af-44e8-90c2-c9617ee55578-run-httpd\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.018215 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/615b900f-05af-44e8-90c2-c9617ee55578-log-httpd\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.018356 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/615b900f-05af-44e8-90c2-c9617ee55578-run-httpd\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.022063 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.023514 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-config-data\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.026054 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.026214 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-scripts\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.028645 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.044032 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djxtm\" (UniqueName: \"kubernetes.io/projected/615b900f-05af-44e8-90c2-c9617ee55578-kube-api-access-djxtm\") pod \"ceilometer-0\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.187055 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.191213 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.741175 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 13:58:13 crc kubenswrapper[4812]: W0216 13:58:13.748976 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod615b900f_05af_44e8_90c2_c9617ee55578.slice/crio-85f46dde410e65137f082b002ba7b147045c469e91c9217f11898c9210a01675 WatchSource:0}: Error finding container 85f46dde410e65137f082b002ba7b147045c469e91c9217f11898c9210a01675: Status 404 returned error can't find the container with id 85f46dde410e65137f082b002ba7b147045c469e91c9217f11898c9210a01675
Feb 16 13:58:13 crc kubenswrapper[4812]: I0216 13:58:13.895270 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecb30c4d-e441-43a1-927b-02cac798ff1e" path="/var/lib/kubelet/pods/ecb30c4d-e441-43a1-927b-02cac798ff1e/volumes"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.069184 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.152821 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af325ac-63cf-45be-8fd3-564597076853-combined-ca-bundle\") pod \"7af325ac-63cf-45be-8fd3-564597076853\" (UID: \"7af325ac-63cf-45be-8fd3-564597076853\") "
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.152945 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7af325ac-63cf-45be-8fd3-564597076853-logs\") pod \"7af325ac-63cf-45be-8fd3-564597076853\" (UID: \"7af325ac-63cf-45be-8fd3-564597076853\") "
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.153091 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n22sw\" (UniqueName: \"kubernetes.io/projected/7af325ac-63cf-45be-8fd3-564597076853-kube-api-access-n22sw\") pod \"7af325ac-63cf-45be-8fd3-564597076853\" (UID: \"7af325ac-63cf-45be-8fd3-564597076853\") "
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.153213 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af325ac-63cf-45be-8fd3-564597076853-config-data\") pod \"7af325ac-63cf-45be-8fd3-564597076853\" (UID: \"7af325ac-63cf-45be-8fd3-564597076853\") "
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.155484 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7af325ac-63cf-45be-8fd3-564597076853-logs" (OuterVolumeSpecName: "logs") pod "7af325ac-63cf-45be-8fd3-564597076853" (UID: "7af325ac-63cf-45be-8fd3-564597076853"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.169776 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7af325ac-63cf-45be-8fd3-564597076853-kube-api-access-n22sw" (OuterVolumeSpecName: "kube-api-access-n22sw") pod "7af325ac-63cf-45be-8fd3-564597076853" (UID: "7af325ac-63cf-45be-8fd3-564597076853"). InnerVolumeSpecName "kube-api-access-n22sw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.255250 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7af325ac-63cf-45be-8fd3-564597076853-config-data" (OuterVolumeSpecName: "config-data") pod "7af325ac-63cf-45be-8fd3-564597076853" (UID: "7af325ac-63cf-45be-8fd3-564597076853"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.257381 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af325ac-63cf-45be-8fd3-564597076853-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.257427 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7af325ac-63cf-45be-8fd3-564597076853-logs\") on node \"crc\" DevicePath \"\""
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.267321 4812 generic.go:334] "Generic (PLEG): container finished" podID="7af325ac-63cf-45be-8fd3-564597076853" containerID="a61ef979f04212710d67616625e45a5ebdca870badc18af177201a8cd692200f" exitCode=0
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.267576 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.270347 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7af325ac-63cf-45be-8fd3-564597076853","Type":"ContainerDied","Data":"a61ef979f04212710d67616625e45a5ebdca870badc18af177201a8cd692200f"}
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.270425 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7af325ac-63cf-45be-8fd3-564597076853","Type":"ContainerDied","Data":"4f10136fe9bcc532aad846a8acbb11f951d91b74ad1ec20fc98881619640f37b"}
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.272307 4812 scope.go:117] "RemoveContainer" containerID="a61ef979f04212710d67616625e45a5ebdca870badc18af177201a8cd692200f"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.274688 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n22sw\" (UniqueName: \"kubernetes.io/projected/7af325ac-63cf-45be-8fd3-564597076853-kube-api-access-n22sw\") on node \"crc\" DevicePath \"\""
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.276950 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"615b900f-05af-44e8-90c2-c9617ee55578","Type":"ContainerStarted","Data":"85f46dde410e65137f082b002ba7b147045c469e91c9217f11898c9210a01675"}
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.313950 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7af325ac-63cf-45be-8fd3-564597076853-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7af325ac-63cf-45be-8fd3-564597076853" (UID: "7af325ac-63cf-45be-8fd3-564597076853"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.380062 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af325ac-63cf-45be-8fd3-564597076853-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.444855 4812 scope.go:117] "RemoveContainer" containerID="469d9ba20dc1f86770af02f2bc1c8495552dd556a9aff1be43e929870af6cd78"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.474656 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.576805 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.577326 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.577536 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.599350 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef"} pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.599992 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" containerID="cri-o://fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" gracePeriod=600
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.653692 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.665988 4812 scope.go:117] "RemoveContainer" containerID="a61ef979f04212710d67616625e45a5ebdca870badc18af177201a8cd692200f"
Feb 16 13:58:14 crc kubenswrapper[4812]: E0216 13:58:14.671510 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a61ef979f04212710d67616625e45a5ebdca870badc18af177201a8cd692200f\": container with ID starting with a61ef979f04212710d67616625e45a5ebdca870badc18af177201a8cd692200f not found: ID does not exist" containerID="a61ef979f04212710d67616625e45a5ebdca870badc18af177201a8cd692200f"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.671584 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a61ef979f04212710d67616625e45a5ebdca870badc18af177201a8cd692200f"} err="failed to get container status \"a61ef979f04212710d67616625e45a5ebdca870badc18af177201a8cd692200f\": rpc error: code = NotFound desc = could not find container \"a61ef979f04212710d67616625e45a5ebdca870badc18af177201a8cd692200f\": container with ID starting with a61ef979f04212710d67616625e45a5ebdca870badc18af177201a8cd692200f not found: ID does not exist"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.671635 4812 scope.go:117] "RemoveContainer" containerID="469d9ba20dc1f86770af02f2bc1c8495552dd556a9aff1be43e929870af6cd78"
Feb 16 13:58:14 crc kubenswrapper[4812]: E0216 13:58:14.674573 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"469d9ba20dc1f86770af02f2bc1c8495552dd556a9aff1be43e929870af6cd78\": container with ID starting with 469d9ba20dc1f86770af02f2bc1c8495552dd556a9aff1be43e929870af6cd78 not found: ID does not exist" containerID="469d9ba20dc1f86770af02f2bc1c8495552dd556a9aff1be43e929870af6cd78"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.674667 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469d9ba20dc1f86770af02f2bc1c8495552dd556a9aff1be43e929870af6cd78"} err="failed to get container status \"469d9ba20dc1f86770af02f2bc1c8495552dd556a9aff1be43e929870af6cd78\": rpc error: code = NotFound desc = could not find container \"469d9ba20dc1f86770af02f2bc1c8495552dd556a9aff1be43e929870af6cd78\": container with ID starting with 469d9ba20dc1f86770af02f2bc1c8495552dd556a9aff1be43e929870af6cd78 not found: ID does not exist"
Feb 16 13:58:14 crc kubenswrapper[4812]: E0216 13:58:14.741036 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.747410 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.789986 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.822584 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 16 13:58:14 crc kubenswrapper[4812]: E0216 13:58:14.824247 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af325ac-63cf-45be-8fd3-564597076853" containerName="nova-api-log"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.824405 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af325ac-63cf-45be-8fd3-564597076853" containerName="nova-api-log"
Feb 16 13:58:14 crc kubenswrapper[4812]: E0216 13:58:14.824755 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af325ac-63cf-45be-8fd3-564597076853" containerName="nova-api-api"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.824851 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af325ac-63cf-45be-8fd3-564597076853" containerName="nova-api-api"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.825195 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7af325ac-63cf-45be-8fd3-564597076853" containerName="nova-api-api"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.825290 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7af325ac-63cf-45be-8fd3-564597076853" containerName="nova-api-log"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.828568 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.834759 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.834912 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.835131 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.846660 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.932304 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.932387 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-config-data\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.933019 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-internal-tls-certs\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.933217 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvtxb\" (UniqueName: \"kubernetes.io/projected/45ac10bb-0132-41c8-9f99-4c5a266ece13-kube-api-access-tvtxb\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.933635 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-public-tls-certs\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:14 crc kubenswrapper[4812]: I0216 13:58:14.933697 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45ac10bb-0132-41c8-9f99-4c5a266ece13-logs\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.036788 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-internal-tls-certs\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.036889 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvtxb\" (UniqueName: \"kubernetes.io/projected/45ac10bb-0132-41c8-9f99-4c5a266ece13-kube-api-access-tvtxb\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.037035 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-public-tls-certs\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.037067 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45ac10bb-0132-41c8-9f99-4c5a266ece13-logs\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.037185 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.037253 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-config-data\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.038864 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45ac10bb-0132-41c8-9f99-4c5a266ece13-logs\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.044400 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.044900 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-config-data\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.047218 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-public-tls-certs\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.051346 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-internal-tls-certs\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.060048 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvtxb\" (UniqueName: \"kubernetes.io/projected/45ac10bb-0132-41c8-9f99-4c5a266ece13-kube-api-access-tvtxb\") pod \"nova-api-0\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " pod="openstack/nova-api-0"
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.174996 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.322154 4812 generic.go:334] "Generic (PLEG): container finished" podID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" exitCode=0
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.322857 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerDied","Data":"fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef"}
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.322963 4812 scope.go:117] "RemoveContainer" containerID="e326161e933a75a00a9297a9e1cbd3d6a1ed2f661892851e02b5e7109aebd29d"
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.324436 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef"
Feb 16 13:58:15 crc kubenswrapper[4812]: E0216 13:58:15.325022 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0"
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.344883 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"615b900f-05af-44e8-90c2-c9617ee55578","Type":"ContainerStarted","Data":"b4e8d2a313895d1778c8d413324e0aba3725237d59482af1de7859fa510d3210"}
Feb 16 13:58:15 crc kubenswrapper[4812]: I0216 13:58:15.382783 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.640874 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-p7tcs"]
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.643929 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p7tcs"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.648207 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.648864 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.668613 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-p7tcs"]
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.762753 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-config-data\") pod \"nova-cell1-cell-mapping-p7tcs\" (UID: \"c77cba8e-f37e-4a5f-a795-13999695c004\") " pod="openstack/nova-cell1-cell-mapping-p7tcs"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.762986 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8v78\" (UniqueName: \"kubernetes.io/projected/c77cba8e-f37e-4a5f-a795-13999695c004-kube-api-access-v8v78\") pod \"nova-cell1-cell-mapping-p7tcs\" (UID: \"c77cba8e-f37e-4a5f-a795-13999695c004\") " pod="openstack/nova-cell1-cell-mapping-p7tcs"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.763037 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-scripts\") pod \"nova-cell1-cell-mapping-p7tcs\" (UID: \"c77cba8e-f37e-4a5f-a795-13999695c004\") " pod="openstack/nova-cell1-cell-mapping-p7tcs"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.763280 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p7tcs\" (UID: \"c77cba8e-f37e-4a5f-a795-13999695c004\") " pod="openstack/nova-cell1-cell-mapping-p7tcs"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.867344 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-config-data\") pod \"nova-cell1-cell-mapping-p7tcs\" (UID: \"c77cba8e-f37e-4a5f-a795-13999695c004\") " pod="openstack/nova-cell1-cell-mapping-p7tcs"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.867544 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8v78\" (UniqueName: \"kubernetes.io/projected/c77cba8e-f37e-4a5f-a795-13999695c004-kube-api-access-v8v78\") pod \"nova-cell1-cell-mapping-p7tcs\" (UID: \"c77cba8e-f37e-4a5f-a795-13999695c004\") " pod="openstack/nova-cell1-cell-mapping-p7tcs"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.867605 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-scripts\") pod \"nova-cell1-cell-mapping-p7tcs\" (UID: \"c77cba8e-f37e-4a5f-a795-13999695c004\") " pod="openstack/nova-cell1-cell-mapping-p7tcs"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.867693 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p7tcs\" (UID: \"c77cba8e-f37e-4a5f-a795-13999695c004\") " pod="openstack/nova-cell1-cell-mapping-p7tcs"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.877057 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-scripts\") pod \"nova-cell1-cell-mapping-p7tcs\" (UID: \"c77cba8e-f37e-4a5f-a795-13999695c004\") " pod="openstack/nova-cell1-cell-mapping-p7tcs"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.877208 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p7tcs\" (UID: \"c77cba8e-f37e-4a5f-a795-13999695c004\") " pod="openstack/nova-cell1-cell-mapping-p7tcs"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.878178 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-config-data\") pod \"nova-cell1-cell-mapping-p7tcs\" (UID: \"c77cba8e-f37e-4a5f-a795-13999695c004\") " pod="openstack/nova-cell1-cell-mapping-p7tcs"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.892555 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8v78\" (UniqueName: \"kubernetes.io/projected/c77cba8e-f37e-4a5f-a795-13999695c004-kube-api-access-v8v78\") pod \"nova-cell1-cell-mapping-p7tcs\" (UID: \"c77cba8e-f37e-4a5f-a795-13999695c004\") " pod="openstack/nova-cell1-cell-mapping-p7tcs"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.901950 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7af325ac-63cf-45be-8fd3-564597076853" path="/var/lib/kubelet/pods/7af325ac-63cf-45be-8fd3-564597076853/volumes"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:15.980671 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p7tcs"
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:16.399212 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"615b900f-05af-44e8-90c2-c9617ee55578","Type":"ContainerStarted","Data":"e412eb2f1e2f6daac332cee40a2799b14c1cfe01a2b0b12da47c0bb0046ef3fd"}
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:16.625150 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 13:58:16 crc kubenswrapper[4812]: I0216 13:58:16.712842 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-p7tcs"]
Feb 16 13:58:17 crc kubenswrapper[4812]: I0216 13:58:17.468234 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45ac10bb-0132-41c8-9f99-4c5a266ece13","Type":"ContainerStarted","Data":"9495d8dee6709ab51431f1ae08f915cc1e416daee59067c145911641531d36cc"}
Feb 16 13:58:17 crc kubenswrapper[4812]: I0216 13:58:17.468872 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45ac10bb-0132-41c8-9f99-4c5a266ece13","Type":"ContainerStarted","Data":"300c5202473b0ce968aad65431a1b75acbbb464462962ba2653f8632fe0356d7"}
Feb 16 13:58:17 crc kubenswrapper[4812]: I0216 13:58:17.468894 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45ac10bb-0132-41c8-9f99-4c5a266ece13","Type":"ContainerStarted","Data":"9277a7f93da99a46de0ef2da93da3f85a78cf696b44ed7ec4ec99b57497e9fe3"}
Feb 16 13:58:17 crc kubenswrapper[4812]: I0216 13:58:17.478763 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"615b900f-05af-44e8-90c2-c9617ee55578","Type":"ContainerStarted","Data":"e09f7603dff0c9aee70023b1139406428d2026e9fe233076742028df3d6e0e41"}
Feb 16 13:58:17 crc kubenswrapper[4812]: I0216 13:58:17.489252 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p7tcs" event={"ID":"c77cba8e-f37e-4a5f-a795-13999695c004","Type":"ContainerStarted","Data":"4af6b842f6da140a89f3af5019348860b68855e3bb018ba6d2d7b598b72ca632"}
Feb 16 13:58:17 crc kubenswrapper[4812]: I0216 13:58:17.489336 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p7tcs" event={"ID":"c77cba8e-f37e-4a5f-a795-13999695c004","Type":"ContainerStarted","Data":"94b2d09a2ee007af52e54557bb49ea1d2d4ded76059700389b1090d14bb726ff"}
Feb 16 13:58:17 crc kubenswrapper[4812]: I0216 13:58:17.518661 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.518617483 podStartE2EDuration="3.518617483s" podCreationTimestamp="2026-02-16 13:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:58:17.513008361 +0000 UTC m=+1586.577339062" watchObservedRunningTime="2026-02-16 13:58:17.518617483 +0000 UTC m=+1586.582948184"
Feb 16 13:58:17 crc kubenswrapper[4812]: I0216 13:58:17.541900 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-p7tcs" podStartSLOduration=2.541868236 podStartE2EDuration="2.541868236s" podCreationTimestamp="2026-02-16 13:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:58:17.53336077 +0000 UTC m=+1586.597691471" watchObservedRunningTime="2026-02-16 13:58:17.541868236 +0000 UTC m=+1586.606198927"
Feb 16 13:58:17 crc kubenswrapper[4812]: I0216 13:58:17.645372 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-6sbj2"
Feb 16 13:58:17 crc kubenswrapper[4812]: I0216 13:58:17.785946 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-cqh8x"]
Feb 16 13:58:17 
crc kubenswrapper[4812]: I0216 13:58:17.786394 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" podUID="a3da896f-2c71-43dc-afdf-6cfc4c1b01ba" containerName="dnsmasq-dns" containerID="cri-o://7b85905d2b7bbcd9668614469409330a5a5526964f70094610fc93361a50a223" gracePeriod=10 Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.496265 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.500223 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-ovsdbserver-nb\") pod \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.500341 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kt7q\" (UniqueName: \"kubernetes.io/projected/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-kube-api-access-7kt7q\") pod \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.500493 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-dns-svc\") pod \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.500526 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-config\") pod \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.500644 
4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-ovsdbserver-sb\") pod \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.500743 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-dns-swift-storage-0\") pod \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\" (UID: \"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba\") " Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.510062 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-kube-api-access-7kt7q" (OuterVolumeSpecName: "kube-api-access-7kt7q") pod "a3da896f-2c71-43dc-afdf-6cfc4c1b01ba" (UID: "a3da896f-2c71-43dc-afdf-6cfc4c1b01ba"). InnerVolumeSpecName "kube-api-access-7kt7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.536565 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"615b900f-05af-44e8-90c2-c9617ee55578","Type":"ContainerStarted","Data":"c8623f81826725d711e442380d213f93d4f6785647d78eb44c03420ce26b4ac8"} Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.536906 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="615b900f-05af-44e8-90c2-c9617ee55578" containerName="ceilometer-central-agent" containerID="cri-o://b4e8d2a313895d1778c8d413324e0aba3725237d59482af1de7859fa510d3210" gracePeriod=30 Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.541155 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.542043 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="615b900f-05af-44e8-90c2-c9617ee55578" containerName="sg-core" containerID="cri-o://e09f7603dff0c9aee70023b1139406428d2026e9fe233076742028df3d6e0e41" gracePeriod=30 Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.542387 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="615b900f-05af-44e8-90c2-c9617ee55578" containerName="proxy-httpd" containerID="cri-o://c8623f81826725d711e442380d213f93d4f6785647d78eb44c03420ce26b4ac8" gracePeriod=30 Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.542515 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="615b900f-05af-44e8-90c2-c9617ee55578" containerName="ceilometer-notification-agent" containerID="cri-o://e412eb2f1e2f6daac332cee40a2799b14c1cfe01a2b0b12da47c0bb0046ef3fd" gracePeriod=30 Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.552498 4812 generic.go:334] 
"Generic (PLEG): container finished" podID="a3da896f-2c71-43dc-afdf-6cfc4c1b01ba" containerID="7b85905d2b7bbcd9668614469409330a5a5526964f70094610fc93361a50a223" exitCode=0 Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.553508 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.557244 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" event={"ID":"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba","Type":"ContainerDied","Data":"7b85905d2b7bbcd9668614469409330a5a5526964f70094610fc93361a50a223"} Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.557319 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-cqh8x" event={"ID":"a3da896f-2c71-43dc-afdf-6cfc4c1b01ba","Type":"ContainerDied","Data":"3e5d26d5035ab43a38b3338350df42d468e6d7b790917f50990553b8b6780092"} Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.557347 4812 scope.go:117] "RemoveContainer" containerID="7b85905d2b7bbcd9668614469409330a5a5526964f70094610fc93361a50a223" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.620040 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kt7q\" (UniqueName: \"kubernetes.io/projected/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-kube-api-access-7kt7q\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.639157 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3da896f-2c71-43dc-afdf-6cfc4c1b01ba" (UID: "a3da896f-2c71-43dc-afdf-6cfc4c1b01ba"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.650260 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.608179123 podStartE2EDuration="6.6499678s" podCreationTimestamp="2026-02-16 13:58:12 +0000 UTC" firstStartedPulling="2026-02-16 13:58:13.751943165 +0000 UTC m=+1582.816273866" lastFinishedPulling="2026-02-16 13:58:17.793731832 +0000 UTC m=+1586.858062543" observedRunningTime="2026-02-16 13:58:18.589402068 +0000 UTC m=+1587.653732779" watchObservedRunningTime="2026-02-16 13:58:18.6499678 +0000 UTC m=+1587.714298511" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.657175 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a3da896f-2c71-43dc-afdf-6cfc4c1b01ba" (UID: "a3da896f-2c71-43dc-afdf-6cfc4c1b01ba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.697693 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a3da896f-2c71-43dc-afdf-6cfc4c1b01ba" (UID: "a3da896f-2c71-43dc-afdf-6cfc4c1b01ba"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.723641 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.723694 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.723710 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.734225 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a3da896f-2c71-43dc-afdf-6cfc4c1b01ba" (UID: "a3da896f-2c71-43dc-afdf-6cfc4c1b01ba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.738187 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-config" (OuterVolumeSpecName: "config") pod "a3da896f-2c71-43dc-afdf-6cfc4c1b01ba" (UID: "a3da896f-2c71-43dc-afdf-6cfc4c1b01ba"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.826079 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.826137 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba-config\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.866842 4812 scope.go:117] "RemoveContainer" containerID="16b65f67e955ceea3558aa49cecd4274a4865f762bafce0feb7ca8d1bc33280d" Feb 16 13:58:18 crc kubenswrapper[4812]: E0216 13:58:18.881753 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.910975 4812 scope.go:117] "RemoveContainer" containerID="7b85905d2b7bbcd9668614469409330a5a5526964f70094610fc93361a50a223" Feb 16 13:58:18 crc kubenswrapper[4812]: E0216 13:58:18.911714 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b85905d2b7bbcd9668614469409330a5a5526964f70094610fc93361a50a223\": container with ID starting with 7b85905d2b7bbcd9668614469409330a5a5526964f70094610fc93361a50a223 not found: ID does not exist" containerID="7b85905d2b7bbcd9668614469409330a5a5526964f70094610fc93361a50a223" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.911764 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7b85905d2b7bbcd9668614469409330a5a5526964f70094610fc93361a50a223"} err="failed to get container status \"7b85905d2b7bbcd9668614469409330a5a5526964f70094610fc93361a50a223\": rpc error: code = NotFound desc = could not find container \"7b85905d2b7bbcd9668614469409330a5a5526964f70094610fc93361a50a223\": container with ID starting with 7b85905d2b7bbcd9668614469409330a5a5526964f70094610fc93361a50a223 not found: ID does not exist" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.911795 4812 scope.go:117] "RemoveContainer" containerID="16b65f67e955ceea3558aa49cecd4274a4865f762bafce0feb7ca8d1bc33280d" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.911863 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-cqh8x"] Feb 16 13:58:18 crc kubenswrapper[4812]: E0216 13:58:18.912219 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16b65f67e955ceea3558aa49cecd4274a4865f762bafce0feb7ca8d1bc33280d\": container with ID starting with 16b65f67e955ceea3558aa49cecd4274a4865f762bafce0feb7ca8d1bc33280d not found: ID does not exist" containerID="16b65f67e955ceea3558aa49cecd4274a4865f762bafce0feb7ca8d1bc33280d" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.912248 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16b65f67e955ceea3558aa49cecd4274a4865f762bafce0feb7ca8d1bc33280d"} err="failed to get container status \"16b65f67e955ceea3558aa49cecd4274a4865f762bafce0feb7ca8d1bc33280d\": rpc error: code = NotFound desc = could not find container \"16b65f67e955ceea3558aa49cecd4274a4865f762bafce0feb7ca8d1bc33280d\": container with ID starting with 16b65f67e955ceea3558aa49cecd4274a4865f762bafce0feb7ca8d1bc33280d not found: ID does not exist" Feb 16 13:58:18 crc kubenswrapper[4812]: I0216 13:58:18.927730 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-757b4f8459-cqh8x"] Feb 16 13:58:19 crc kubenswrapper[4812]: I0216 13:58:19.569852 4812 generic.go:334] "Generic (PLEG): container finished" podID="615b900f-05af-44e8-90c2-c9617ee55578" containerID="c8623f81826725d711e442380d213f93d4f6785647d78eb44c03420ce26b4ac8" exitCode=0 Feb 16 13:58:19 crc kubenswrapper[4812]: I0216 13:58:19.569959 4812 generic.go:334] "Generic (PLEG): container finished" podID="615b900f-05af-44e8-90c2-c9617ee55578" containerID="e09f7603dff0c9aee70023b1139406428d2026e9fe233076742028df3d6e0e41" exitCode=2 Feb 16 13:58:19 crc kubenswrapper[4812]: I0216 13:58:19.569972 4812 generic.go:334] "Generic (PLEG): container finished" podID="615b900f-05af-44e8-90c2-c9617ee55578" containerID="e412eb2f1e2f6daac332cee40a2799b14c1cfe01a2b0b12da47c0bb0046ef3fd" exitCode=0 Feb 16 13:58:19 crc kubenswrapper[4812]: I0216 13:58:19.570053 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"615b900f-05af-44e8-90c2-c9617ee55578","Type":"ContainerDied","Data":"c8623f81826725d711e442380d213f93d4f6785647d78eb44c03420ce26b4ac8"} Feb 16 13:58:19 crc kubenswrapper[4812]: I0216 13:58:19.570102 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"615b900f-05af-44e8-90c2-c9617ee55578","Type":"ContainerDied","Data":"e09f7603dff0c9aee70023b1139406428d2026e9fe233076742028df3d6e0e41"} Feb 16 13:58:19 crc kubenswrapper[4812]: I0216 13:58:19.570114 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"615b900f-05af-44e8-90c2-c9617ee55578","Type":"ContainerDied","Data":"e412eb2f1e2f6daac332cee40a2799b14c1cfe01a2b0b12da47c0bb0046ef3fd"} Feb 16 13:58:19 crc kubenswrapper[4812]: I0216 13:58:19.898551 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3da896f-2c71-43dc-afdf-6cfc4c1b01ba" path="/var/lib/kubelet/pods/a3da896f-2c71-43dc-afdf-6cfc4c1b01ba/volumes" Feb 16 13:58:22 crc kubenswrapper[4812]: 
I0216 13:58:22.322633 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.445901 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-sg-core-conf-yaml\") pod \"615b900f-05af-44e8-90c2-c9617ee55578\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.446027 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/615b900f-05af-44e8-90c2-c9617ee55578-run-httpd\") pod \"615b900f-05af-44e8-90c2-c9617ee55578\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.446089 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-ceilometer-tls-certs\") pod \"615b900f-05af-44e8-90c2-c9617ee55578\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.446123 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djxtm\" (UniqueName: \"kubernetes.io/projected/615b900f-05af-44e8-90c2-c9617ee55578-kube-api-access-djxtm\") pod \"615b900f-05af-44e8-90c2-c9617ee55578\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.446422 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/615b900f-05af-44e8-90c2-c9617ee55578-log-httpd\") pod \"615b900f-05af-44e8-90c2-c9617ee55578\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.446488 4812 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-config-data\") pod \"615b900f-05af-44e8-90c2-c9617ee55578\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.446512 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-scripts\") pod \"615b900f-05af-44e8-90c2-c9617ee55578\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.446612 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-combined-ca-bundle\") pod \"615b900f-05af-44e8-90c2-c9617ee55578\" (UID: \"615b900f-05af-44e8-90c2-c9617ee55578\") " Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.451088 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/615b900f-05af-44e8-90c2-c9617ee55578-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "615b900f-05af-44e8-90c2-c9617ee55578" (UID: "615b900f-05af-44e8-90c2-c9617ee55578"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.452459 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/615b900f-05af-44e8-90c2-c9617ee55578-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "615b900f-05af-44e8-90c2-c9617ee55578" (UID: "615b900f-05af-44e8-90c2-c9617ee55578"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.457437 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/615b900f-05af-44e8-90c2-c9617ee55578-kube-api-access-djxtm" (OuterVolumeSpecName: "kube-api-access-djxtm") pod "615b900f-05af-44e8-90c2-c9617ee55578" (UID: "615b900f-05af-44e8-90c2-c9617ee55578"). InnerVolumeSpecName "kube-api-access-djxtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.459406 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-scripts" (OuterVolumeSpecName: "scripts") pod "615b900f-05af-44e8-90c2-c9617ee55578" (UID: "615b900f-05af-44e8-90c2-c9617ee55578"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.512978 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "615b900f-05af-44e8-90c2-c9617ee55578" (UID: "615b900f-05af-44e8-90c2-c9617ee55578"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.539956 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "615b900f-05af-44e8-90c2-c9617ee55578" (UID: "615b900f-05af-44e8-90c2-c9617ee55578"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.550583 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/615b900f-05af-44e8-90c2-c9617ee55578-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.550954 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.551016 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.551071 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/615b900f-05af-44e8-90c2-c9617ee55578-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.551175 4812 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.551230 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djxtm\" (UniqueName: \"kubernetes.io/projected/615b900f-05af-44e8-90c2-c9617ee55578-kube-api-access-djxtm\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.577472 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "615b900f-05af-44e8-90c2-c9617ee55578" (UID: 
"615b900f-05af-44e8-90c2-c9617ee55578"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.623163 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-config-data" (OuterVolumeSpecName: "config-data") pod "615b900f-05af-44e8-90c2-c9617ee55578" (UID: "615b900f-05af-44e8-90c2-c9617ee55578"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.631877 4812 generic.go:334] "Generic (PLEG): container finished" podID="615b900f-05af-44e8-90c2-c9617ee55578" containerID="b4e8d2a313895d1778c8d413324e0aba3725237d59482af1de7859fa510d3210" exitCode=0 Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.631966 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"615b900f-05af-44e8-90c2-c9617ee55578","Type":"ContainerDied","Data":"b4e8d2a313895d1778c8d413324e0aba3725237d59482af1de7859fa510d3210"} Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.632014 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.632050 4812 scope.go:117] "RemoveContainer" containerID="c8623f81826725d711e442380d213f93d4f6785647d78eb44c03420ce26b4ac8" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.632029 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"615b900f-05af-44e8-90c2-c9617ee55578","Type":"ContainerDied","Data":"85f46dde410e65137f082b002ba7b147045c469e91c9217f11898c9210a01675"} Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.654830 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.654898 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/615b900f-05af-44e8-90c2-c9617ee55578-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.682700 4812 scope.go:117] "RemoveContainer" containerID="e09f7603dff0c9aee70023b1139406428d2026e9fe233076742028df3d6e0e41" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.702967 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.720299 4812 scope.go:117] "RemoveContainer" containerID="e412eb2f1e2f6daac332cee40a2799b14c1cfe01a2b0b12da47c0bb0046ef3fd" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.725681 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.745396 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:58:22 crc kubenswrapper[4812]: E0216 13:58:22.746291 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="615b900f-05af-44e8-90c2-c9617ee55578" containerName="ceilometer-notification-agent" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.746323 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="615b900f-05af-44e8-90c2-c9617ee55578" containerName="ceilometer-notification-agent" Feb 16 13:58:22 crc kubenswrapper[4812]: E0216 13:58:22.746360 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3da896f-2c71-43dc-afdf-6cfc4c1b01ba" containerName="dnsmasq-dns" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.746368 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3da896f-2c71-43dc-afdf-6cfc4c1b01ba" containerName="dnsmasq-dns" Feb 16 13:58:22 crc kubenswrapper[4812]: E0216 13:58:22.746379 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="615b900f-05af-44e8-90c2-c9617ee55578" containerName="ceilometer-central-agent" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.746386 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="615b900f-05af-44e8-90c2-c9617ee55578" containerName="ceilometer-central-agent" Feb 16 13:58:22 crc kubenswrapper[4812]: E0216 13:58:22.746405 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="615b900f-05af-44e8-90c2-c9617ee55578" containerName="proxy-httpd" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.746412 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="615b900f-05af-44e8-90c2-c9617ee55578" containerName="proxy-httpd" Feb 16 13:58:22 crc kubenswrapper[4812]: E0216 13:58:22.746426 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="615b900f-05af-44e8-90c2-c9617ee55578" containerName="sg-core" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.746432 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="615b900f-05af-44e8-90c2-c9617ee55578" containerName="sg-core" Feb 16 13:58:22 crc kubenswrapper[4812]: E0216 13:58:22.746491 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a3da896f-2c71-43dc-afdf-6cfc4c1b01ba" containerName="init" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.746501 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3da896f-2c71-43dc-afdf-6cfc4c1b01ba" containerName="init" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.746789 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="615b900f-05af-44e8-90c2-c9617ee55578" containerName="sg-core" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.746812 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3da896f-2c71-43dc-afdf-6cfc4c1b01ba" containerName="dnsmasq-dns" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.746830 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="615b900f-05af-44e8-90c2-c9617ee55578" containerName="proxy-httpd" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.746843 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="615b900f-05af-44e8-90c2-c9617ee55578" containerName="ceilometer-notification-agent" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.746857 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="615b900f-05af-44e8-90c2-c9617ee55578" containerName="ceilometer-central-agent" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.755658 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.761856 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.766009 4812 scope.go:117] "RemoveContainer" containerID="b4e8d2a313895d1778c8d413324e0aba3725237d59482af1de7859fa510d3210" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.766573 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.767616 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.767887 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.844641 4812 scope.go:117] "RemoveContainer" containerID="c8623f81826725d711e442380d213f93d4f6785647d78eb44c03420ce26b4ac8" Feb 16 13:58:22 crc kubenswrapper[4812]: E0216 13:58:22.845411 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8623f81826725d711e442380d213f93d4f6785647d78eb44c03420ce26b4ac8\": container with ID starting with c8623f81826725d711e442380d213f93d4f6785647d78eb44c03420ce26b4ac8 not found: ID does not exist" containerID="c8623f81826725d711e442380d213f93d4f6785647d78eb44c03420ce26b4ac8" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.845513 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8623f81826725d711e442380d213f93d4f6785647d78eb44c03420ce26b4ac8"} err="failed to get container status \"c8623f81826725d711e442380d213f93d4f6785647d78eb44c03420ce26b4ac8\": rpc error: code = NotFound desc = could not find container \"c8623f81826725d711e442380d213f93d4f6785647d78eb44c03420ce26b4ac8\": 
container with ID starting with c8623f81826725d711e442380d213f93d4f6785647d78eb44c03420ce26b4ac8 not found: ID does not exist" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.845564 4812 scope.go:117] "RemoveContainer" containerID="e09f7603dff0c9aee70023b1139406428d2026e9fe233076742028df3d6e0e41" Feb 16 13:58:22 crc kubenswrapper[4812]: E0216 13:58:22.846183 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e09f7603dff0c9aee70023b1139406428d2026e9fe233076742028df3d6e0e41\": container with ID starting with e09f7603dff0c9aee70023b1139406428d2026e9fe233076742028df3d6e0e41 not found: ID does not exist" containerID="e09f7603dff0c9aee70023b1139406428d2026e9fe233076742028df3d6e0e41" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.846228 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09f7603dff0c9aee70023b1139406428d2026e9fe233076742028df3d6e0e41"} err="failed to get container status \"e09f7603dff0c9aee70023b1139406428d2026e9fe233076742028df3d6e0e41\": rpc error: code = NotFound desc = could not find container \"e09f7603dff0c9aee70023b1139406428d2026e9fe233076742028df3d6e0e41\": container with ID starting with e09f7603dff0c9aee70023b1139406428d2026e9fe233076742028df3d6e0e41 not found: ID does not exist" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.846261 4812 scope.go:117] "RemoveContainer" containerID="e412eb2f1e2f6daac332cee40a2799b14c1cfe01a2b0b12da47c0bb0046ef3fd" Feb 16 13:58:22 crc kubenswrapper[4812]: E0216 13:58:22.846701 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e412eb2f1e2f6daac332cee40a2799b14c1cfe01a2b0b12da47c0bb0046ef3fd\": container with ID starting with e412eb2f1e2f6daac332cee40a2799b14c1cfe01a2b0b12da47c0bb0046ef3fd not found: ID does not exist" 
containerID="e412eb2f1e2f6daac332cee40a2799b14c1cfe01a2b0b12da47c0bb0046ef3fd" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.846760 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e412eb2f1e2f6daac332cee40a2799b14c1cfe01a2b0b12da47c0bb0046ef3fd"} err="failed to get container status \"e412eb2f1e2f6daac332cee40a2799b14c1cfe01a2b0b12da47c0bb0046ef3fd\": rpc error: code = NotFound desc = could not find container \"e412eb2f1e2f6daac332cee40a2799b14c1cfe01a2b0b12da47c0bb0046ef3fd\": container with ID starting with e412eb2f1e2f6daac332cee40a2799b14c1cfe01a2b0b12da47c0bb0046ef3fd not found: ID does not exist" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.846803 4812 scope.go:117] "RemoveContainer" containerID="b4e8d2a313895d1778c8d413324e0aba3725237d59482af1de7859fa510d3210" Feb 16 13:58:22 crc kubenswrapper[4812]: E0216 13:58:22.847540 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e8d2a313895d1778c8d413324e0aba3725237d59482af1de7859fa510d3210\": container with ID starting with b4e8d2a313895d1778c8d413324e0aba3725237d59482af1de7859fa510d3210 not found: ID does not exist" containerID="b4e8d2a313895d1778c8d413324e0aba3725237d59482af1de7859fa510d3210" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.847573 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e8d2a313895d1778c8d413324e0aba3725237d59482af1de7859fa510d3210"} err="failed to get container status \"b4e8d2a313895d1778c8d413324e0aba3725237d59482af1de7859fa510d3210\": rpc error: code = NotFound desc = could not find container \"b4e8d2a313895d1778c8d413324e0aba3725237d59482af1de7859fa510d3210\": container with ID starting with b4e8d2a313895d1778c8d413324e0aba3725237d59482af1de7859fa510d3210 not found: ID does not exist" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.861527 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae1afc9-20e3-4925-bcbf-cda49f1f4011-config-data\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.861633 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae1afc9-20e3-4925-bcbf-cda49f1f4011-log-httpd\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.861697 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae1afc9-20e3-4925-bcbf-cda49f1f4011-run-httpd\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.861750 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae1afc9-20e3-4925-bcbf-cda49f1f4011-scripts\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.861786 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krwmg\" (UniqueName: \"kubernetes.io/projected/dae1afc9-20e3-4925-bcbf-cda49f1f4011-kube-api-access-krwmg\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.861833 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/dae1afc9-20e3-4925-bcbf-cda49f1f4011-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.861885 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae1afc9-20e3-4925-bcbf-cda49f1f4011-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.861918 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae1afc9-20e3-4925-bcbf-cda49f1f4011-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.965675 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae1afc9-20e3-4925-bcbf-cda49f1f4011-config-data\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.965827 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae1afc9-20e3-4925-bcbf-cda49f1f4011-log-httpd\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.965900 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae1afc9-20e3-4925-bcbf-cda49f1f4011-run-httpd\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 
13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.965931 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae1afc9-20e3-4925-bcbf-cda49f1f4011-scripts\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.965984 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krwmg\" (UniqueName: \"kubernetes.io/projected/dae1afc9-20e3-4925-bcbf-cda49f1f4011-kube-api-access-krwmg\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.966054 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dae1afc9-20e3-4925-bcbf-cda49f1f4011-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.966104 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae1afc9-20e3-4925-bcbf-cda49f1f4011-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.966150 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae1afc9-20e3-4925-bcbf-cda49f1f4011-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.966794 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/dae1afc9-20e3-4925-bcbf-cda49f1f4011-log-httpd\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.968558 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae1afc9-20e3-4925-bcbf-cda49f1f4011-run-httpd\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.974353 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae1afc9-20e3-4925-bcbf-cda49f1f4011-scripts\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.974914 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae1afc9-20e3-4925-bcbf-cda49f1f4011-config-data\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.975041 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae1afc9-20e3-4925-bcbf-cda49f1f4011-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.975303 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae1afc9-20e3-4925-bcbf-cda49f1f4011-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.976498 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dae1afc9-20e3-4925-bcbf-cda49f1f4011-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:22 crc kubenswrapper[4812]: I0216 13:58:22.989865 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krwmg\" (UniqueName: \"kubernetes.io/projected/dae1afc9-20e3-4925-bcbf-cda49f1f4011-kube-api-access-krwmg\") pod \"ceilometer-0\" (UID: \"dae1afc9-20e3-4925-bcbf-cda49f1f4011\") " pod="openstack/ceilometer-0" Feb 16 13:58:23 crc kubenswrapper[4812]: I0216 13:58:23.143830 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 13:58:23 crc kubenswrapper[4812]: I0216 13:58:23.711580 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 13:58:23 crc kubenswrapper[4812]: I0216 13:58:23.904278 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="615b900f-05af-44e8-90c2-c9617ee55578" path="/var/lib/kubelet/pods/615b900f-05af-44e8-90c2-c9617ee55578/volumes" Feb 16 13:58:24 crc kubenswrapper[4812]: I0216 13:58:24.683821 4812 generic.go:334] "Generic (PLEG): container finished" podID="c77cba8e-f37e-4a5f-a795-13999695c004" containerID="4af6b842f6da140a89f3af5019348860b68855e3bb018ba6d2d7b598b72ca632" exitCode=0 Feb 16 13:58:24 crc kubenswrapper[4812]: I0216 13:58:24.683980 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p7tcs" event={"ID":"c77cba8e-f37e-4a5f-a795-13999695c004","Type":"ContainerDied","Data":"4af6b842f6da140a89f3af5019348860b68855e3bb018ba6d2d7b598b72ca632"} Feb 16 13:58:24 crc kubenswrapper[4812]: I0216 13:58:24.687262 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"dae1afc9-20e3-4925-bcbf-cda49f1f4011","Type":"ContainerStarted","Data":"4b6251f236b79e3909c4d60154f8bc952af56c417e4c030cef3e0bd3847f564b"} Feb 16 13:58:24 crc kubenswrapper[4812]: I0216 13:58:24.687305 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dae1afc9-20e3-4925-bcbf-cda49f1f4011","Type":"ContainerStarted","Data":"bc560139295d68c66b6e36ded0ebc6dc54f6f581fc2a46ba28dc0b1ea4fa7fb9"} Feb 16 13:58:25 crc kubenswrapper[4812]: I0216 13:58:25.175821 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:58:25 crc kubenswrapper[4812]: I0216 13:58:25.175887 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:58:25 crc kubenswrapper[4812]: I0216 13:58:25.706330 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dae1afc9-20e3-4925-bcbf-cda49f1f4011","Type":"ContainerStarted","Data":"dfd9d2f1b31e9f2d90a2990e3594dd253cecc583f743c26b01be0c94343c780d"} Feb 16 13:58:25 crc kubenswrapper[4812]: I0216 13:58:25.879839 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 13:58:25 crc kubenswrapper[4812]: E0216 13:58:25.880546 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.196968 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="45ac10bb-0132-41c8-9f99-4c5a266ece13" containerName="nova-api-api" probeResult="failure" 
output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.197983 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="45ac10bb-0132-41c8-9f99-4c5a266ece13" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.290592 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p7tcs" Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.391781 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8v78\" (UniqueName: \"kubernetes.io/projected/c77cba8e-f37e-4a5f-a795-13999695c004-kube-api-access-v8v78\") pod \"c77cba8e-f37e-4a5f-a795-13999695c004\" (UID: \"c77cba8e-f37e-4a5f-a795-13999695c004\") " Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.391923 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-scripts\") pod \"c77cba8e-f37e-4a5f-a795-13999695c004\" (UID: \"c77cba8e-f37e-4a5f-a795-13999695c004\") " Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.392185 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-combined-ca-bundle\") pod \"c77cba8e-f37e-4a5f-a795-13999695c004\" (UID: \"c77cba8e-f37e-4a5f-a795-13999695c004\") " Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.392488 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-config-data\") pod 
\"c77cba8e-f37e-4a5f-a795-13999695c004\" (UID: \"c77cba8e-f37e-4a5f-a795-13999695c004\") " Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.406276 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c77cba8e-f37e-4a5f-a795-13999695c004-kube-api-access-v8v78" (OuterVolumeSpecName: "kube-api-access-v8v78") pod "c77cba8e-f37e-4a5f-a795-13999695c004" (UID: "c77cba8e-f37e-4a5f-a795-13999695c004"). InnerVolumeSpecName "kube-api-access-v8v78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.406939 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-scripts" (OuterVolumeSpecName: "scripts") pod "c77cba8e-f37e-4a5f-a795-13999695c004" (UID: "c77cba8e-f37e-4a5f-a795-13999695c004"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.438578 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c77cba8e-f37e-4a5f-a795-13999695c004" (UID: "c77cba8e-f37e-4a5f-a795-13999695c004"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.440595 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-config-data" (OuterVolumeSpecName: "config-data") pod "c77cba8e-f37e-4a5f-a795-13999695c004" (UID: "c77cba8e-f37e-4a5f-a795-13999695c004"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.496255 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8v78\" (UniqueName: \"kubernetes.io/projected/c77cba8e-f37e-4a5f-a795-13999695c004-kube-api-access-v8v78\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.496315 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.496330 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.496341 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c77cba8e-f37e-4a5f-a795-13999695c004-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.723528 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p7tcs" event={"ID":"c77cba8e-f37e-4a5f-a795-13999695c004","Type":"ContainerDied","Data":"94b2d09a2ee007af52e54557bb49ea1d2d4ded76059700389b1090d14bb726ff"} Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.723594 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94b2d09a2ee007af52e54557bb49ea1d2d4ded76059700389b1090d14bb726ff" Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.723704 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p7tcs" Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.735574 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dae1afc9-20e3-4925-bcbf-cda49f1f4011","Type":"ContainerStarted","Data":"acd7defe63a9ac55bbef4800983402e482074be673619d99cc2e6d033bb8e694"} Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.941337 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.943549 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="45ac10bb-0132-41c8-9f99-4c5a266ece13" containerName="nova-api-api" containerID="cri-o://9495d8dee6709ab51431f1ae08f915cc1e416daee59067c145911641531d36cc" gracePeriod=30 Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.943797 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="45ac10bb-0132-41c8-9f99-4c5a266ece13" containerName="nova-api-log" containerID="cri-o://300c5202473b0ce968aad65431a1b75acbbb464462962ba2653f8632fe0356d7" gracePeriod=30 Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.965643 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:58:26 crc kubenswrapper[4812]: I0216 13:58:26.965983 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="16fc436e-ef2a-4aa9-ad5f-1da36fb18a41" containerName="nova-scheduler-scheduler" containerID="cri-o://eb54dbfc6f57d2bf16293e83c97b308738834698c42fd8028cbc20cb07c6bd40" gracePeriod=30 Feb 16 13:58:27 crc kubenswrapper[4812]: I0216 13:58:27.103264 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:58:27 crc kubenswrapper[4812]: I0216 13:58:27.103816 4812 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-metadata-0" podUID="ad25a505-b306-47ee-92dc-19b8635d455b" containerName="nova-metadata-log" containerID="cri-o://745d592566322b077db01ae41e9d70c3784bcc9fe8836eb667517adc56012d7d" gracePeriod=30 Feb 16 13:58:27 crc kubenswrapper[4812]: I0216 13:58:27.104097 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ad25a505-b306-47ee-92dc-19b8635d455b" containerName="nova-metadata-metadata" containerID="cri-o://5e98e4a545cb2ffe6fcfb23c9b6bb86f3bb9674d1fb7c6bcfe38d80ddb565d57" gracePeriod=30 Feb 16 13:58:27 crc kubenswrapper[4812]: I0216 13:58:27.755050 4812 generic.go:334] "Generic (PLEG): container finished" podID="45ac10bb-0132-41c8-9f99-4c5a266ece13" containerID="300c5202473b0ce968aad65431a1b75acbbb464462962ba2653f8632fe0356d7" exitCode=143 Feb 16 13:58:27 crc kubenswrapper[4812]: I0216 13:58:27.755315 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45ac10bb-0132-41c8-9f99-4c5a266ece13","Type":"ContainerDied","Data":"300c5202473b0ce968aad65431a1b75acbbb464462962ba2653f8632fe0356d7"} Feb 16 13:58:27 crc kubenswrapper[4812]: I0216 13:58:27.768070 4812 generic.go:334] "Generic (PLEG): container finished" podID="ad25a505-b306-47ee-92dc-19b8635d455b" containerID="745d592566322b077db01ae41e9d70c3784bcc9fe8836eb667517adc56012d7d" exitCode=143 Feb 16 13:58:27 crc kubenswrapper[4812]: I0216 13:58:27.768155 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ad25a505-b306-47ee-92dc-19b8635d455b","Type":"ContainerDied","Data":"745d592566322b077db01ae41e9d70c3784bcc9fe8836eb667517adc56012d7d"} Feb 16 13:58:27 crc kubenswrapper[4812]: E0216 13:58:27.955326 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="eb54dbfc6f57d2bf16293e83c97b308738834698c42fd8028cbc20cb07c6bd40" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 13:58:27 crc kubenswrapper[4812]: E0216 13:58:27.958617 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eb54dbfc6f57d2bf16293e83c97b308738834698c42fd8028cbc20cb07c6bd40" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 13:58:27 crc kubenswrapper[4812]: E0216 13:58:27.961256 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eb54dbfc6f57d2bf16293e83c97b308738834698c42fd8028cbc20cb07c6bd40" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 13:58:27 crc kubenswrapper[4812]: E0216 13:58:27.961356 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="16fc436e-ef2a-4aa9-ad5f-1da36fb18a41" containerName="nova-scheduler-scheduler" Feb 16 13:58:28 crc kubenswrapper[4812]: I0216 13:58:28.784426 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dae1afc9-20e3-4925-bcbf-cda49f1f4011","Type":"ContainerStarted","Data":"f62a33fd5fa3d7c832cf61d2d9358d4e44bac099dcc02e233f7190216306da5d"} Feb 16 13:58:28 crc kubenswrapper[4812]: I0216 13:58:28.785265 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 13:58:28 crc kubenswrapper[4812]: I0216 13:58:28.788601 4812 generic.go:334] "Generic (PLEG): container finished" podID="16fc436e-ef2a-4aa9-ad5f-1da36fb18a41" containerID="eb54dbfc6f57d2bf16293e83c97b308738834698c42fd8028cbc20cb07c6bd40" 
exitCode=0 Feb 16 13:58:28 crc kubenswrapper[4812]: I0216 13:58:28.788678 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41","Type":"ContainerDied","Data":"eb54dbfc6f57d2bf16293e83c97b308738834698c42fd8028cbc20cb07c6bd40"} Feb 16 13:58:28 crc kubenswrapper[4812]: I0216 13:58:28.788714 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41","Type":"ContainerDied","Data":"1c96790a3a73d50b81fffe7e6b4901ec61efb936d49210371cd4435550118979"} Feb 16 13:58:28 crc kubenswrapper[4812]: I0216 13:58:28.788729 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c96790a3a73d50b81fffe7e6b4901ec61efb936d49210371cd4435550118979" Feb 16 13:58:28 crc kubenswrapper[4812]: I0216 13:58:28.815568 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.120956196 podStartE2EDuration="6.81553301s" podCreationTimestamp="2026-02-16 13:58:22 +0000 UTC" firstStartedPulling="2026-02-16 13:58:23.795072322 +0000 UTC m=+1592.859403013" lastFinishedPulling="2026-02-16 13:58:27.489649126 +0000 UTC m=+1596.553979827" observedRunningTime="2026-02-16 13:58:28.811052621 +0000 UTC m=+1597.875383342" watchObservedRunningTime="2026-02-16 13:58:28.81553301 +0000 UTC m=+1597.879863741" Feb 16 13:58:28 crc kubenswrapper[4812]: I0216 13:58:28.869009 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:58:28 crc kubenswrapper[4812]: I0216 13:58:28.985731 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-config-data\") pod \"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41\" (UID: \"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41\") " Feb 16 13:58:28 crc kubenswrapper[4812]: I0216 13:58:28.985833 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-combined-ca-bundle\") pod \"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41\" (UID: \"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41\") " Feb 16 13:58:28 crc kubenswrapper[4812]: I0216 13:58:28.985981 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5jqv\" (UniqueName: \"kubernetes.io/projected/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-kube-api-access-b5jqv\") pod \"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41\" (UID: \"16fc436e-ef2a-4aa9-ad5f-1da36fb18a41\") " Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.015878 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-kube-api-access-b5jqv" (OuterVolumeSpecName: "kube-api-access-b5jqv") pod "16fc436e-ef2a-4aa9-ad5f-1da36fb18a41" (UID: "16fc436e-ef2a-4aa9-ad5f-1da36fb18a41"). InnerVolumeSpecName "kube-api-access-b5jqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.046880 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16fc436e-ef2a-4aa9-ad5f-1da36fb18a41" (UID: "16fc436e-ef2a-4aa9-ad5f-1da36fb18a41"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.049403 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-config-data" (OuterVolumeSpecName: "config-data") pod "16fc436e-ef2a-4aa9-ad5f-1da36fb18a41" (UID: "16fc436e-ef2a-4aa9-ad5f-1da36fb18a41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.090107 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.090197 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.090221 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5jqv\" (UniqueName: \"kubernetes.io/projected/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41-kube-api-access-b5jqv\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.798888 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.869400 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:58:29 crc kubenswrapper[4812]: E0216 13:58:29.885971 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.908935 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.917061 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:58:29 crc kubenswrapper[4812]: E0216 13:58:29.917915 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fc436e-ef2a-4aa9-ad5f-1da36fb18a41" containerName="nova-scheduler-scheduler" Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.917954 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fc436e-ef2a-4aa9-ad5f-1da36fb18a41" containerName="nova-scheduler-scheduler" Feb 16 13:58:29 crc kubenswrapper[4812]: E0216 13:58:29.918006 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c77cba8e-f37e-4a5f-a795-13999695c004" containerName="nova-manage" Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.918016 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c77cba8e-f37e-4a5f-a795-13999695c004" containerName="nova-manage" Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.918305 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c77cba8e-f37e-4a5f-a795-13999695c004" containerName="nova-manage" Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.918338 4812 
memory_manager.go:354] "RemoveStaleState removing state" podUID="16fc436e-ef2a-4aa9-ad5f-1da36fb18a41" containerName="nova-scheduler-scheduler" Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.919666 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.924481 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 13:58:29 crc kubenswrapper[4812]: I0216 13:58:29.965012 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:58:30 crc kubenswrapper[4812]: I0216 13:58:30.016153 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03ccdce-b222-4ef5-be48-9d0ab6465290-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c03ccdce-b222-4ef5-be48-9d0ab6465290\") " pod="openstack/nova-scheduler-0" Feb 16 13:58:30 crc kubenswrapper[4812]: I0216 13:58:30.016600 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psrb5\" (UniqueName: \"kubernetes.io/projected/c03ccdce-b222-4ef5-be48-9d0ab6465290-kube-api-access-psrb5\") pod \"nova-scheduler-0\" (UID: \"c03ccdce-b222-4ef5-be48-9d0ab6465290\") " pod="openstack/nova-scheduler-0" Feb 16 13:58:30 crc kubenswrapper[4812]: I0216 13:58:30.016773 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03ccdce-b222-4ef5-be48-9d0ab6465290-config-data\") pod \"nova-scheduler-0\" (UID: \"c03ccdce-b222-4ef5-be48-9d0ab6465290\") " pod="openstack/nova-scheduler-0" Feb 16 13:58:30 crc kubenswrapper[4812]: I0216 13:58:30.120251 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psrb5\" (UniqueName: 
\"kubernetes.io/projected/c03ccdce-b222-4ef5-be48-9d0ab6465290-kube-api-access-psrb5\") pod \"nova-scheduler-0\" (UID: \"c03ccdce-b222-4ef5-be48-9d0ab6465290\") " pod="openstack/nova-scheduler-0" Feb 16 13:58:30 crc kubenswrapper[4812]: I0216 13:58:30.120426 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03ccdce-b222-4ef5-be48-9d0ab6465290-config-data\") pod \"nova-scheduler-0\" (UID: \"c03ccdce-b222-4ef5-be48-9d0ab6465290\") " pod="openstack/nova-scheduler-0" Feb 16 13:58:30 crc kubenswrapper[4812]: I0216 13:58:30.120517 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03ccdce-b222-4ef5-be48-9d0ab6465290-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c03ccdce-b222-4ef5-be48-9d0ab6465290\") " pod="openstack/nova-scheduler-0" Feb 16 13:58:30 crc kubenswrapper[4812]: I0216 13:58:30.128142 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03ccdce-b222-4ef5-be48-9d0ab6465290-config-data\") pod \"nova-scheduler-0\" (UID: \"c03ccdce-b222-4ef5-be48-9d0ab6465290\") " pod="openstack/nova-scheduler-0" Feb 16 13:58:30 crc kubenswrapper[4812]: I0216 13:58:30.128237 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03ccdce-b222-4ef5-be48-9d0ab6465290-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c03ccdce-b222-4ef5-be48-9d0ab6465290\") " pod="openstack/nova-scheduler-0" Feb 16 13:58:30 crc kubenswrapper[4812]: I0216 13:58:30.154623 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psrb5\" (UniqueName: \"kubernetes.io/projected/c03ccdce-b222-4ef5-be48-9d0ab6465290-kube-api-access-psrb5\") pod \"nova-scheduler-0\" (UID: \"c03ccdce-b222-4ef5-be48-9d0ab6465290\") " 
pod="openstack/nova-scheduler-0" Feb 16 13:58:30 crc kubenswrapper[4812]: I0216 13:58:30.253668 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 13:58:30 crc kubenswrapper[4812]: I0216 13:58:30.290402 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ad25a505-b306-47ee-92dc-19b8635d455b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": read tcp 10.217.0.2:47868->10.217.0.217:8775: read: connection reset by peer" Feb 16 13:58:30 crc kubenswrapper[4812]: I0216 13:58:30.290497 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ad25a505-b306-47ee-92dc-19b8635d455b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": read tcp 10.217.0.2:47876->10.217.0.217:8775: read: connection reset by peer" Feb 16 13:58:30 crc kubenswrapper[4812]: I0216 13:58:30.860846 4812 generic.go:334] "Generic (PLEG): container finished" podID="ad25a505-b306-47ee-92dc-19b8635d455b" containerID="5e98e4a545cb2ffe6fcfb23c9b6bb86f3bb9674d1fb7c6bcfe38d80ddb565d57" exitCode=0 Feb 16 13:58:30 crc kubenswrapper[4812]: I0216 13:58:30.861840 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ad25a505-b306-47ee-92dc-19b8635d455b","Type":"ContainerDied","Data":"5e98e4a545cb2ffe6fcfb23c9b6bb86f3bb9674d1fb7c6bcfe38d80ddb565d57"} Feb 16 13:58:30 crc kubenswrapper[4812]: I0216 13:58:30.976659 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.064084 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-config-data\") pod \"ad25a505-b306-47ee-92dc-19b8635d455b\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.064336 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v4h5\" (UniqueName: \"kubernetes.io/projected/ad25a505-b306-47ee-92dc-19b8635d455b-kube-api-access-6v4h5\") pod \"ad25a505-b306-47ee-92dc-19b8635d455b\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.064436 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-combined-ca-bundle\") pod \"ad25a505-b306-47ee-92dc-19b8635d455b\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.064648 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad25a505-b306-47ee-92dc-19b8635d455b-logs\") pod \"ad25a505-b306-47ee-92dc-19b8635d455b\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.064706 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-nova-metadata-tls-certs\") pod \"ad25a505-b306-47ee-92dc-19b8635d455b\" (UID: \"ad25a505-b306-47ee-92dc-19b8635d455b\") " Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.067580 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 13:58:31 
crc kubenswrapper[4812]: I0216 13:58:31.074001 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad25a505-b306-47ee-92dc-19b8635d455b-logs" (OuterVolumeSpecName: "logs") pod "ad25a505-b306-47ee-92dc-19b8635d455b" (UID: "ad25a505-b306-47ee-92dc-19b8635d455b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.098422 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad25a505-b306-47ee-92dc-19b8635d455b-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.106503 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad25a505-b306-47ee-92dc-19b8635d455b-kube-api-access-6v4h5" (OuterVolumeSpecName: "kube-api-access-6v4h5") pod "ad25a505-b306-47ee-92dc-19b8635d455b" (UID: "ad25a505-b306-47ee-92dc-19b8635d455b"). InnerVolumeSpecName "kube-api-access-6v4h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.170589 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-config-data" (OuterVolumeSpecName: "config-data") pod "ad25a505-b306-47ee-92dc-19b8635d455b" (UID: "ad25a505-b306-47ee-92dc-19b8635d455b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.260345 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.260414 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v4h5\" (UniqueName: \"kubernetes.io/projected/ad25a505-b306-47ee-92dc-19b8635d455b-kube-api-access-6v4h5\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.266731 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad25a505-b306-47ee-92dc-19b8635d455b" (UID: "ad25a505-b306-47ee-92dc-19b8635d455b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.312345 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ad25a505-b306-47ee-92dc-19b8635d455b" (UID: "ad25a505-b306-47ee-92dc-19b8635d455b"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.367555 4812 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.367609 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad25a505-b306-47ee-92dc-19b8635d455b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.890667 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.914314 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16fc436e-ef2a-4aa9-ad5f-1da36fb18a41" path="/var/lib/kubelet/pods/16fc436e-ef2a-4aa9-ad5f-1da36fb18a41/volumes" Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.915721 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ad25a505-b306-47ee-92dc-19b8635d455b","Type":"ContainerDied","Data":"81893a0981942299fc5d5719d066ad690c026c6ddc21fa378ee51b6cb2ee6b2e"} Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.915766 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c03ccdce-b222-4ef5-be48-9d0ab6465290","Type":"ContainerStarted","Data":"c8d5bfb40717a02085e04da02379e416d19c86c8e2900fad6be0196b24a6de5b"} Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.915783 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c03ccdce-b222-4ef5-be48-9d0ab6465290","Type":"ContainerStarted","Data":"47e870630e9b91591bdddf62ba377fe94cd7f7d4e6e71c88116bd18ed8f85292"} Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 
13:58:31.915961 4812 scope.go:117] "RemoveContainer" containerID="5e98e4a545cb2ffe6fcfb23c9b6bb86f3bb9674d1fb7c6bcfe38d80ddb565d57" Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.955852 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.955815969 podStartE2EDuration="2.955815969s" podCreationTimestamp="2026-02-16 13:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:58:31.950591028 +0000 UTC m=+1601.014921749" watchObservedRunningTime="2026-02-16 13:58:31.955815969 +0000 UTC m=+1601.020146690" Feb 16 13:58:31 crc kubenswrapper[4812]: I0216 13:58:31.983843 4812 scope.go:117] "RemoveContainer" containerID="745d592566322b077db01ae41e9d70c3784bcc9fe8836eb667517adc56012d7d" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.047571 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.061555 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.080575 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:58:32 crc kubenswrapper[4812]: E0216 13:58:32.081407 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad25a505-b306-47ee-92dc-19b8635d455b" containerName="nova-metadata-log" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.081463 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad25a505-b306-47ee-92dc-19b8635d455b" containerName="nova-metadata-log" Feb 16 13:58:32 crc kubenswrapper[4812]: E0216 13:58:32.081521 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad25a505-b306-47ee-92dc-19b8635d455b" containerName="nova-metadata-metadata" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.081532 4812 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ad25a505-b306-47ee-92dc-19b8635d455b" containerName="nova-metadata-metadata" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.081835 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad25a505-b306-47ee-92dc-19b8635d455b" containerName="nova-metadata-log" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.081884 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad25a505-b306-47ee-92dc-19b8635d455b" containerName="nova-metadata-metadata" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.083632 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.087186 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.087850 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.096981 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.191577 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb74f45a-d06e-4770-a282-ea0c7305ef2c-config-data\") pod \"nova-metadata-0\" (UID: \"bb74f45a-d06e-4770-a282-ea0c7305ef2c\") " pod="openstack/nova-metadata-0" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.191777 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb74f45a-d06e-4770-a282-ea0c7305ef2c-logs\") pod \"nova-metadata-0\" (UID: \"bb74f45a-d06e-4770-a282-ea0c7305ef2c\") " pod="openstack/nova-metadata-0" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 
13:58:32.191882 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb74f45a-d06e-4770-a282-ea0c7305ef2c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bb74f45a-d06e-4770-a282-ea0c7305ef2c\") " pod="openstack/nova-metadata-0" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.192371 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb74f45a-d06e-4770-a282-ea0c7305ef2c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb74f45a-d06e-4770-a282-ea0c7305ef2c\") " pod="openstack/nova-metadata-0" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.192516 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fdxn\" (UniqueName: \"kubernetes.io/projected/bb74f45a-d06e-4770-a282-ea0c7305ef2c-kube-api-access-6fdxn\") pod \"nova-metadata-0\" (UID: \"bb74f45a-d06e-4770-a282-ea0c7305ef2c\") " pod="openstack/nova-metadata-0" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.296393 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb74f45a-d06e-4770-a282-ea0c7305ef2c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb74f45a-d06e-4770-a282-ea0c7305ef2c\") " pod="openstack/nova-metadata-0" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.297108 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fdxn\" (UniqueName: \"kubernetes.io/projected/bb74f45a-d06e-4770-a282-ea0c7305ef2c-kube-api-access-6fdxn\") pod \"nova-metadata-0\" (UID: \"bb74f45a-d06e-4770-a282-ea0c7305ef2c\") " pod="openstack/nova-metadata-0" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.297266 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb74f45a-d06e-4770-a282-ea0c7305ef2c-config-data\") pod \"nova-metadata-0\" (UID: \"bb74f45a-d06e-4770-a282-ea0c7305ef2c\") " pod="openstack/nova-metadata-0" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.297483 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb74f45a-d06e-4770-a282-ea0c7305ef2c-logs\") pod \"nova-metadata-0\" (UID: \"bb74f45a-d06e-4770-a282-ea0c7305ef2c\") " pod="openstack/nova-metadata-0" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.297589 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb74f45a-d06e-4770-a282-ea0c7305ef2c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bb74f45a-d06e-4770-a282-ea0c7305ef2c\") " pod="openstack/nova-metadata-0" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.298187 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb74f45a-d06e-4770-a282-ea0c7305ef2c-logs\") pod \"nova-metadata-0\" (UID: \"bb74f45a-d06e-4770-a282-ea0c7305ef2c\") " pod="openstack/nova-metadata-0" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.306376 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb74f45a-d06e-4770-a282-ea0c7305ef2c-config-data\") pod \"nova-metadata-0\" (UID: \"bb74f45a-d06e-4770-a282-ea0c7305ef2c\") " pod="openstack/nova-metadata-0" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.306432 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb74f45a-d06e-4770-a282-ea0c7305ef2c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bb74f45a-d06e-4770-a282-ea0c7305ef2c\") " 
pod="openstack/nova-metadata-0" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.306723 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb74f45a-d06e-4770-a282-ea0c7305ef2c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb74f45a-d06e-4770-a282-ea0c7305ef2c\") " pod="openstack/nova-metadata-0" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.320072 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fdxn\" (UniqueName: \"kubernetes.io/projected/bb74f45a-d06e-4770-a282-ea0c7305ef2c-kube-api-access-6fdxn\") pod \"nova-metadata-0\" (UID: \"bb74f45a-d06e-4770-a282-ea0c7305ef2c\") " pod="openstack/nova-metadata-0" Feb 16 13:58:32 crc kubenswrapper[4812]: I0216 13:58:32.413675 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 13:58:33 crc kubenswrapper[4812]: I0216 13:58:33.003590 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 13:58:33 crc kubenswrapper[4812]: W0216 13:58:33.006502 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb74f45a_d06e_4770_a282_ea0c7305ef2c.slice/crio-db784adeb0302fc4d25b423ffdd5edb5a729aa59af433ff9b4abc34d2d4335d6 WatchSource:0}: Error finding container db784adeb0302fc4d25b423ffdd5edb5a729aa59af433ff9b4abc34d2d4335d6: Status 404 returned error can't find the container with id db784adeb0302fc4d25b423ffdd5edb5a729aa59af433ff9b4abc34d2d4335d6 Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.003847 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad25a505-b306-47ee-92dc-19b8635d455b" path="/var/lib/kubelet/pods/ad25a505-b306-47ee-92dc-19b8635d455b/volumes" Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.006842 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"bb74f45a-d06e-4770-a282-ea0c7305ef2c","Type":"ContainerStarted","Data":"ac4271ff22129a224d76dde0da3f80c71eb9944bb9f58e830c29130c6ef939ec"} Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.006900 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb74f45a-d06e-4770-a282-ea0c7305ef2c","Type":"ContainerStarted","Data":"f6eda552bb0c958d4f73032989ab4f1879724bef5e8fc15e80eed31278a27bd2"} Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.006915 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb74f45a-d06e-4770-a282-ea0c7305ef2c","Type":"ContainerStarted","Data":"db784adeb0302fc4d25b423ffdd5edb5a729aa59af433ff9b4abc34d2d4335d6"} Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.007752 4812 generic.go:334] "Generic (PLEG): container finished" podID="45ac10bb-0132-41c8-9f99-4c5a266ece13" containerID="9495d8dee6709ab51431f1ae08f915cc1e416daee59067c145911641531d36cc" exitCode=0 Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.007803 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45ac10bb-0132-41c8-9f99-4c5a266ece13","Type":"ContainerDied","Data":"9495d8dee6709ab51431f1ae08f915cc1e416daee59067c145911641531d36cc"} Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.047958 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.053870 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.053823297 podStartE2EDuration="3.053823297s" podCreationTimestamp="2026-02-16 13:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:58:34.036366693 +0000 UTC m=+1603.100697394" watchObservedRunningTime="2026-02-16 13:58:34.053823297 +0000 UTC m=+1603.118154008" Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.168754 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-config-data\") pod \"45ac10bb-0132-41c8-9f99-4c5a266ece13\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.168819 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45ac10bb-0132-41c8-9f99-4c5a266ece13-logs\") pod \"45ac10bb-0132-41c8-9f99-4c5a266ece13\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.168872 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-combined-ca-bundle\") pod \"45ac10bb-0132-41c8-9f99-4c5a266ece13\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.168924 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvtxb\" (UniqueName: \"kubernetes.io/projected/45ac10bb-0132-41c8-9f99-4c5a266ece13-kube-api-access-tvtxb\") pod \"45ac10bb-0132-41c8-9f99-4c5a266ece13\" (UID: 
\"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.169059 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-internal-tls-certs\") pod \"45ac10bb-0132-41c8-9f99-4c5a266ece13\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.169468 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-public-tls-certs\") pod \"45ac10bb-0132-41c8-9f99-4c5a266ece13\" (UID: \"45ac10bb-0132-41c8-9f99-4c5a266ece13\") " Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.175611 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45ac10bb-0132-41c8-9f99-4c5a266ece13-logs" (OuterVolumeSpecName: "logs") pod "45ac10bb-0132-41c8-9f99-4c5a266ece13" (UID: "45ac10bb-0132-41c8-9f99-4c5a266ece13"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.183226 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ac10bb-0132-41c8-9f99-4c5a266ece13-kube-api-access-tvtxb" (OuterVolumeSpecName: "kube-api-access-tvtxb") pod "45ac10bb-0132-41c8-9f99-4c5a266ece13" (UID: "45ac10bb-0132-41c8-9f99-4c5a266ece13"). InnerVolumeSpecName "kube-api-access-tvtxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.218157 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45ac10bb-0132-41c8-9f99-4c5a266ece13" (UID: "45ac10bb-0132-41c8-9f99-4c5a266ece13"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.230476 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-config-data" (OuterVolumeSpecName: "config-data") pod "45ac10bb-0132-41c8-9f99-4c5a266ece13" (UID: "45ac10bb-0132-41c8-9f99-4c5a266ece13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.255152 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "45ac10bb-0132-41c8-9f99-4c5a266ece13" (UID: "45ac10bb-0132-41c8-9f99-4c5a266ece13"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.267404 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "45ac10bb-0132-41c8-9f99-4c5a266ece13" (UID: "45ac10bb-0132-41c8-9f99-4c5a266ece13"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.272845 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvtxb\" (UniqueName: \"kubernetes.io/projected/45ac10bb-0132-41c8-9f99-4c5a266ece13-kube-api-access-tvtxb\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.272903 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.272916 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.272931 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.272952 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45ac10bb-0132-41c8-9f99-4c5a266ece13-logs\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:34 crc kubenswrapper[4812]: I0216 13:58:34.272966 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45ac10bb-0132-41c8-9f99-4c5a266ece13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.027349 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45ac10bb-0132-41c8-9f99-4c5a266ece13","Type":"ContainerDied","Data":"9277a7f93da99a46de0ef2da93da3f85a78cf696b44ed7ec4ec99b57497e9fe3"} Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 
13:58:35.027461 4812 scope.go:117] "RemoveContainer" containerID="9495d8dee6709ab51431f1ae08f915cc1e416daee59067c145911641531d36cc" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.027432 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.084901 4812 scope.go:117] "RemoveContainer" containerID="300c5202473b0ce968aad65431a1b75acbbb464462962ba2653f8632fe0356d7" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.091453 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.116398 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.134837 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 13:58:35 crc kubenswrapper[4812]: E0216 13:58:35.135681 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ac10bb-0132-41c8-9f99-4c5a266ece13" containerName="nova-api-api" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.135710 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ac10bb-0132-41c8-9f99-4c5a266ece13" containerName="nova-api-api" Feb 16 13:58:35 crc kubenswrapper[4812]: E0216 13:58:35.135741 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ac10bb-0132-41c8-9f99-4c5a266ece13" containerName="nova-api-log" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.135754 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ac10bb-0132-41c8-9f99-4c5a266ece13" containerName="nova-api-log" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.136007 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="45ac10bb-0132-41c8-9f99-4c5a266ece13" containerName="nova-api-log" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.136035 4812 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="45ac10bb-0132-41c8-9f99-4c5a266ece13" containerName="nova-api-api" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.137761 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.141290 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.141562 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.141782 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.167704 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.200957 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0b3ece-701b-4853-ace9-e21f7a68fc31-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.201054 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce0b3ece-701b-4853-ace9-e21f7a68fc31-logs\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.201138 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0b3ece-701b-4853-ace9-e21f7a68fc31-config-data\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " 
pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.201420 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps4p9\" (UniqueName: \"kubernetes.io/projected/ce0b3ece-701b-4853-ace9-e21f7a68fc31-kube-api-access-ps4p9\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.201657 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0b3ece-701b-4853-ace9-e21f7a68fc31-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.202407 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0b3ece-701b-4853-ace9-e21f7a68fc31-public-tls-certs\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.255305 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.305127 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0b3ece-701b-4853-ace9-e21f7a68fc31-public-tls-certs\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.305268 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0b3ece-701b-4853-ace9-e21f7a68fc31-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.305306 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce0b3ece-701b-4853-ace9-e21f7a68fc31-logs\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.305358 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0b3ece-701b-4853-ace9-e21f7a68fc31-config-data\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.305415 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps4p9\" (UniqueName: \"kubernetes.io/projected/ce0b3ece-701b-4853-ace9-e21f7a68fc31-kube-api-access-ps4p9\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.305508 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0b3ece-701b-4853-ace9-e21f7a68fc31-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.306108 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce0b3ece-701b-4853-ace9-e21f7a68fc31-logs\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.314914 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ce0b3ece-701b-4853-ace9-e21f7a68fc31-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.316059 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0b3ece-701b-4853-ace9-e21f7a68fc31-public-tls-certs\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.316382 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0b3ece-701b-4853-ace9-e21f7a68fc31-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.323506 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0b3ece-701b-4853-ace9-e21f7a68fc31-config-data\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.328985 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps4p9\" (UniqueName: \"kubernetes.io/projected/ce0b3ece-701b-4853-ace9-e21f7a68fc31-kube-api-access-ps4p9\") pod \"nova-api-0\" (UID: \"ce0b3ece-701b-4853-ace9-e21f7a68fc31\") " pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.468511 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 13:58:35 crc kubenswrapper[4812]: I0216 13:58:35.895285 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45ac10bb-0132-41c8-9f99-4c5a266ece13" path="/var/lib/kubelet/pods/45ac10bb-0132-41c8-9f99-4c5a266ece13/volumes" Feb 16 13:58:36 crc kubenswrapper[4812]: I0216 13:58:36.032934 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 13:58:36 crc kubenswrapper[4812]: I0216 13:58:36.880233 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 13:58:36 crc kubenswrapper[4812]: E0216 13:58:36.881378 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 13:58:37 crc kubenswrapper[4812]: I0216 13:58:37.069313 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce0b3ece-701b-4853-ace9-e21f7a68fc31","Type":"ContainerStarted","Data":"671771a56c9c4bf6beaf8de30b3cd695b9f64a9364e3fed10292591200ffff82"} Feb 16 13:58:37 crc kubenswrapper[4812]: I0216 13:58:37.069419 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ce0b3ece-701b-4853-ace9-e21f7a68fc31","Type":"ContainerStarted","Data":"ed00fa851bec88845425b2df1ed54275d1c9097b28a5ca1a5c869c78231dd780"} Feb 16 13:58:37 crc kubenswrapper[4812]: I0216 13:58:37.069432 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ce0b3ece-701b-4853-ace9-e21f7a68fc31","Type":"ContainerStarted","Data":"b4d108830002a69306619090bee724d85d551bd60e3aaf4286f7532f345517d0"} Feb 16 13:58:37 crc kubenswrapper[4812]: I0216 13:58:37.122526 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.122490056 podStartE2EDuration="2.122490056s" podCreationTimestamp="2026-02-16 13:58:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 13:58:37.108255284 +0000 UTC m=+1606.172585995" watchObservedRunningTime="2026-02-16 13:58:37.122490056 +0000 UTC m=+1606.186820757" Feb 16 13:58:37 crc kubenswrapper[4812]: I0216 13:58:37.415250 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 13:58:37 crc kubenswrapper[4812]: I0216 13:58:37.415378 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 13:58:40 crc kubenswrapper[4812]: I0216 13:58:40.254898 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 13:58:40 crc kubenswrapper[4812]: I0216 13:58:40.289897 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 13:58:41 crc kubenswrapper[4812]: I0216 13:58:41.162269 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 13:58:42 crc kubenswrapper[4812]: I0216 13:58:42.415801 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 13:58:42 crc kubenswrapper[4812]: I0216 13:58:42.415896 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 13:58:43 crc kubenswrapper[4812]: I0216 13:58:43.432807 4812 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="bb74f45a-d06e-4770-a282-ea0c7305ef2c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:58:43 crc kubenswrapper[4812]: I0216 13:58:43.432845 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bb74f45a-d06e-4770-a282-ea0c7305ef2c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:58:44 crc kubenswrapper[4812]: E0216 13:58:44.883174 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:58:45 crc kubenswrapper[4812]: I0216 13:58:45.469548 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:58:45 crc kubenswrapper[4812]: I0216 13:58:45.469643 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 13:58:46 crc kubenswrapper[4812]: I0216 13:58:46.483922 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ce0b3ece-701b-4853-ace9-e21f7a68fc31" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:58:46 crc kubenswrapper[4812]: I0216 13:58:46.483922 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ce0b3ece-701b-4853-ace9-e21f7a68fc31" containerName="nova-api-api" probeResult="failure" 
output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 13:58:47 crc kubenswrapper[4812]: I0216 13:58:47.879981 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 13:58:47 crc kubenswrapper[4812]: E0216 13:58:47.881127 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 13:58:52 crc kubenswrapper[4812]: I0216 13:58:52.432226 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 13:58:52 crc kubenswrapper[4812]: I0216 13:58:52.441538 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 13:58:52 crc kubenswrapper[4812]: I0216 13:58:52.445935 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 13:58:53 crc kubenswrapper[4812]: I0216 13:58:53.155597 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 13:58:53 crc kubenswrapper[4812]: I0216 13:58:53.308673 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 13:58:55 crc kubenswrapper[4812]: I0216 13:58:55.480506 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 13:58:55 crc kubenswrapper[4812]: I0216 13:58:55.481504 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 13:58:55 crc kubenswrapper[4812]: 
I0216 13:58:55.483412 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 13:58:55 crc kubenswrapper[4812]: I0216 13:58:55.491575 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 13:58:55 crc kubenswrapper[4812]: E0216 13:58:55.893014 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:58:56 crc kubenswrapper[4812]: I0216 13:58:56.364647 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 13:58:56 crc kubenswrapper[4812]: I0216 13:58:56.379160 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 13:58:59 crc kubenswrapper[4812]: I0216 13:58:59.879079 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 13:58:59 crc kubenswrapper[4812]: E0216 13:58:59.880206 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 13:59:05 crc kubenswrapper[4812]: I0216 13:59:05.852594 4812 scope.go:117] "RemoveContainer" containerID="7b60975c6cf3122e703aa830322893a78a35864c8197a1b883e66c3f41e8d577" Feb 16 13:59:10 crc kubenswrapper[4812]: E0216 13:59:10.882289 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:59:13 crc kubenswrapper[4812]: I0216 13:59:13.882259 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 13:59:13 crc kubenswrapper[4812]: E0216 13:59:13.883483 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 13:59:24 crc kubenswrapper[4812]: E0216 13:59:24.883338 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:59:27 crc kubenswrapper[4812]: I0216 13:59:27.880716 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 13:59:27 crc kubenswrapper[4812]: E0216 13:59:27.882029 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 13:59:37 crc kubenswrapper[4812]: E0216 13:59:37.883300 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:59:39 crc kubenswrapper[4812]: I0216 13:59:39.879068 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 13:59:39 crc kubenswrapper[4812]: E0216 13:59:39.879950 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 13:59:52 crc kubenswrapper[4812]: I0216 13:59:52.119752 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 13:59:52 crc kubenswrapper[4812]: E0216 13:59:52.232242 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 13:59:52 crc kubenswrapper[4812]: E0216 13:59:52.232313 4812 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 13:59:52 crc kubenswrapper[4812]: E0216 13:59:52.232475 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/
var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq54s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-krnzs_openstack(a7d4eae6-781f-4675-a6c3-ee0f1589c735): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 13:59:52 crc kubenswrapper[4812]: E0216 13:59:52.234430 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 13:59:53 crc kubenswrapper[4812]: I0216 13:59:53.879605 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 13:59:53 crc kubenswrapper[4812]: E0216 13:59:53.880384 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:00:00 crc kubenswrapper[4812]: I0216 14:00:00.175044 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m"] Feb 16 14:00:00 crc kubenswrapper[4812]: I0216 14:00:00.179410 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m" Feb 16 14:00:00 crc kubenswrapper[4812]: I0216 14:00:00.184568 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 14:00:00 crc kubenswrapper[4812]: I0216 14:00:00.185629 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 14:00:00 crc kubenswrapper[4812]: I0216 14:00:00.205700 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m"] Feb 16 14:00:00 crc kubenswrapper[4812]: I0216 14:00:00.217335 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-config-volume\") pod \"collect-profiles-29520840-fqw7m\" (UID: \"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m" Feb 16 14:00:00 crc kubenswrapper[4812]: I0216 14:00:00.217471 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-secret-volume\") pod \"collect-profiles-29520840-fqw7m\" (UID: \"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m" Feb 16 14:00:00 crc kubenswrapper[4812]: I0216 14:00:00.217549 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtbww\" (UniqueName: \"kubernetes.io/projected/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-kube-api-access-xtbww\") pod \"collect-profiles-29520840-fqw7m\" (UID: \"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m" Feb 16 14:00:00 crc kubenswrapper[4812]: I0216 14:00:00.321757 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-config-volume\") pod \"collect-profiles-29520840-fqw7m\" (UID: \"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m" Feb 16 14:00:00 crc kubenswrapper[4812]: I0216 14:00:00.321839 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-secret-volume\") pod \"collect-profiles-29520840-fqw7m\" (UID: \"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m" Feb 16 14:00:00 crc kubenswrapper[4812]: I0216 14:00:00.321905 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtbww\" (UniqueName: \"kubernetes.io/projected/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-kube-api-access-xtbww\") pod \"collect-profiles-29520840-fqw7m\" (UID: \"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m" Feb 16 14:00:00 crc kubenswrapper[4812]: I0216 14:00:00.323865 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-config-volume\") pod \"collect-profiles-29520840-fqw7m\" (UID: \"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m" Feb 16 14:00:00 crc kubenswrapper[4812]: I0216 14:00:00.342834 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-secret-volume\") pod \"collect-profiles-29520840-fqw7m\" (UID: \"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m" Feb 16 14:00:00 crc kubenswrapper[4812]: I0216 14:00:00.345294 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtbww\" (UniqueName: \"kubernetes.io/projected/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-kube-api-access-xtbww\") pod \"collect-profiles-29520840-fqw7m\" (UID: \"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m" Feb 16 14:00:00 crc kubenswrapper[4812]: I0216 14:00:00.513927 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m" Feb 16 14:00:01 crc kubenswrapper[4812]: I0216 14:00:01.153023 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m"] Feb 16 14:00:01 crc kubenswrapper[4812]: I0216 14:00:01.269626 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m" event={"ID":"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a","Type":"ContainerStarted","Data":"bfe731258863da755dadeedcfaf9835f88743e925a42973255ed148fac9c2f6f"} Feb 16 14:00:02 crc kubenswrapper[4812]: I0216 14:00:02.285477 4812 generic.go:334] "Generic (PLEG): container finished" podID="4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a" containerID="2d54a4e91e79f10444c594f4a3f9c52cfc27cf1ddd27805971f2010edf009770" exitCode=0 Feb 16 14:00:02 crc kubenswrapper[4812]: I0216 14:00:02.285505 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m" 
event={"ID":"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a","Type":"ContainerDied","Data":"2d54a4e91e79f10444c594f4a3f9c52cfc27cf1ddd27805971f2010edf009770"} Feb 16 14:00:02 crc kubenswrapper[4812]: E0216 14:00:02.880700 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:00:03 crc kubenswrapper[4812]: I0216 14:00:03.776462 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m" Feb 16 14:00:03 crc kubenswrapper[4812]: I0216 14:00:03.832996 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-secret-volume\") pod \"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a\" (UID: \"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a\") " Feb 16 14:00:03 crc kubenswrapper[4812]: I0216 14:00:03.833553 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtbww\" (UniqueName: \"kubernetes.io/projected/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-kube-api-access-xtbww\") pod \"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a\" (UID: \"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a\") " Feb 16 14:00:03 crc kubenswrapper[4812]: I0216 14:00:03.833911 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-config-volume\") pod \"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a\" (UID: \"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a\") " Feb 16 14:00:03 crc kubenswrapper[4812]: I0216 14:00:03.835730 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-config-volume" (OuterVolumeSpecName: "config-volume") pod "4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a" (UID: "4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:00:03 crc kubenswrapper[4812]: I0216 14:00:03.844756 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a" (UID: "4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:00:03 crc kubenswrapper[4812]: I0216 14:00:03.848848 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-kube-api-access-xtbww" (OuterVolumeSpecName: "kube-api-access-xtbww") pod "4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a" (UID: "4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a"). InnerVolumeSpecName "kube-api-access-xtbww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:00:03 crc kubenswrapper[4812]: I0216 14:00:03.942725 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:03 crc kubenswrapper[4812]: I0216 14:00:03.942761 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:03 crc kubenswrapper[4812]: I0216 14:00:03.942773 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtbww\" (UniqueName: \"kubernetes.io/projected/4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a-kube-api-access-xtbww\") on node \"crc\" DevicePath \"\"" Feb 16 14:00:04 crc kubenswrapper[4812]: I0216 14:00:04.309412 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m" event={"ID":"4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a","Type":"ContainerDied","Data":"bfe731258863da755dadeedcfaf9835f88743e925a42973255ed148fac9c2f6f"} Feb 16 14:00:04 crc kubenswrapper[4812]: I0216 14:00:04.310082 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfe731258863da755dadeedcfaf9835f88743e925a42973255ed148fac9c2f6f" Feb 16 14:00:04 crc kubenswrapper[4812]: I0216 14:00:04.309539 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520840-fqw7m" Feb 16 14:00:06 crc kubenswrapper[4812]: I0216 14:00:06.066028 4812 scope.go:117] "RemoveContainer" containerID="ae91021ff1ab83203f88cd169795174a4f1816886a22a8fb2e0e2791cf4af841" Feb 16 14:00:06 crc kubenswrapper[4812]: I0216 14:00:06.126629 4812 scope.go:117] "RemoveContainer" containerID="5488a4fd597a3432eeae73392d5709fbf6bab4e90c2dcef7da2708de25c2d98e" Feb 16 14:00:08 crc kubenswrapper[4812]: I0216 14:00:08.880298 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:00:08 crc kubenswrapper[4812]: E0216 14:00:08.881607 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:00:14 crc kubenswrapper[4812]: E0216 14:00:14.885677 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:00:20 crc kubenswrapper[4812]: I0216 14:00:20.879277 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:00:20 crc kubenswrapper[4812]: E0216 14:00:20.880078 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:00:28 crc kubenswrapper[4812]: E0216 14:00:28.882943 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:00:32 crc kubenswrapper[4812]: I0216 14:00:32.879666 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:00:32 crc kubenswrapper[4812]: E0216 14:00:32.880490 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:00:42 crc kubenswrapper[4812]: E0216 14:00:42.883705 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:00:43 crc kubenswrapper[4812]: I0216 14:00:43.879170 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:00:43 crc kubenswrapper[4812]: E0216 14:00:43.879534 4812 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:00:54 crc kubenswrapper[4812]: E0216 14:00:54.887254 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:00:56 crc kubenswrapper[4812]: I0216 14:00:56.880054 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:00:56 crc kubenswrapper[4812]: E0216 14:00:56.880703 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.160741 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29520841-jq24z"] Feb 16 14:01:00 crc kubenswrapper[4812]: E0216 14:01:00.161783 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a" containerName="collect-profiles" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.161802 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a" 
containerName="collect-profiles" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.162034 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bd24ffb-3466-42fa-a4ee-4b2ad1f4178a" containerName="collect-profiles" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.162913 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520841-jq24z" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.216101 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520841-jq24z"] Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.316708 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-config-data\") pod \"keystone-cron-29520841-jq24z\" (UID: \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\") " pod="openstack/keystone-cron-29520841-jq24z" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.316831 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-combined-ca-bundle\") pod \"keystone-cron-29520841-jq24z\" (UID: \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\") " pod="openstack/keystone-cron-29520841-jq24z" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.316975 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-fernet-keys\") pod \"keystone-cron-29520841-jq24z\" (UID: \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\") " pod="openstack/keystone-cron-29520841-jq24z" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.317063 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng9x7\" (UniqueName: 
\"kubernetes.io/projected/32a3c3bd-297d-49b8-a083-19f25cacf8c2-kube-api-access-ng9x7\") pod \"keystone-cron-29520841-jq24z\" (UID: \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\") " pod="openstack/keystone-cron-29520841-jq24z" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.420180 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-fernet-keys\") pod \"keystone-cron-29520841-jq24z\" (UID: \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\") " pod="openstack/keystone-cron-29520841-jq24z" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.420278 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng9x7\" (UniqueName: \"kubernetes.io/projected/32a3c3bd-297d-49b8-a083-19f25cacf8c2-kube-api-access-ng9x7\") pod \"keystone-cron-29520841-jq24z\" (UID: \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\") " pod="openstack/keystone-cron-29520841-jq24z" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.420401 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-config-data\") pod \"keystone-cron-29520841-jq24z\" (UID: \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\") " pod="openstack/keystone-cron-29520841-jq24z" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.420519 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-combined-ca-bundle\") pod \"keystone-cron-29520841-jq24z\" (UID: \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\") " pod="openstack/keystone-cron-29520841-jq24z" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.430773 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-combined-ca-bundle\") pod \"keystone-cron-29520841-jq24z\" (UID: \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\") " pod="openstack/keystone-cron-29520841-jq24z" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.431091 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-fernet-keys\") pod \"keystone-cron-29520841-jq24z\" (UID: \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\") " pod="openstack/keystone-cron-29520841-jq24z" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.431099 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-config-data\") pod \"keystone-cron-29520841-jq24z\" (UID: \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\") " pod="openstack/keystone-cron-29520841-jq24z" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.441642 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng9x7\" (UniqueName: \"kubernetes.io/projected/32a3c3bd-297d-49b8-a083-19f25cacf8c2-kube-api-access-ng9x7\") pod \"keystone-cron-29520841-jq24z\" (UID: \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\") " pod="openstack/keystone-cron-29520841-jq24z" Feb 16 14:01:00 crc kubenswrapper[4812]: I0216 14:01:00.486406 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520841-jq24z" Feb 16 14:01:01 crc kubenswrapper[4812]: I0216 14:01:01.022238 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520841-jq24z"] Feb 16 14:01:02 crc kubenswrapper[4812]: I0216 14:01:02.021848 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520841-jq24z" event={"ID":"32a3c3bd-297d-49b8-a083-19f25cacf8c2","Type":"ContainerStarted","Data":"0339ce02cffc1ca76bd11788b041d58e4a31d81d6587ee2eea4a57ceaa300db0"} Feb 16 14:01:02 crc kubenswrapper[4812]: I0216 14:01:02.022225 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520841-jq24z" event={"ID":"32a3c3bd-297d-49b8-a083-19f25cacf8c2","Type":"ContainerStarted","Data":"3d0cf41690f340f446bfe3611e378a1924aed845832081a04ab7af48651a8426"} Feb 16 14:01:02 crc kubenswrapper[4812]: I0216 14:01:02.052364 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29520841-jq24z" podStartSLOduration=2.052341455 podStartE2EDuration="2.052341455s" podCreationTimestamp="2026-02-16 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:01:02.049870575 +0000 UTC m=+1751.114201296" watchObservedRunningTime="2026-02-16 14:01:02.052341455 +0000 UTC m=+1751.116672156" Feb 16 14:01:05 crc kubenswrapper[4812]: I0216 14:01:05.157042 4812 generic.go:334] "Generic (PLEG): container finished" podID="32a3c3bd-297d-49b8-a083-19f25cacf8c2" containerID="0339ce02cffc1ca76bd11788b041d58e4a31d81d6587ee2eea4a57ceaa300db0" exitCode=0 Feb 16 14:01:05 crc kubenswrapper[4812]: I0216 14:01:05.157153 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520841-jq24z" 
event={"ID":"32a3c3bd-297d-49b8-a083-19f25cacf8c2","Type":"ContainerDied","Data":"0339ce02cffc1ca76bd11788b041d58e4a31d81d6587ee2eea4a57ceaa300db0"} Feb 16 14:01:06 crc kubenswrapper[4812]: I0216 14:01:06.681222 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520841-jq24z" Feb 16 14:01:06 crc kubenswrapper[4812]: I0216 14:01:06.690427 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng9x7\" (UniqueName: \"kubernetes.io/projected/32a3c3bd-297d-49b8-a083-19f25cacf8c2-kube-api-access-ng9x7\") pod \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\" (UID: \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\") " Feb 16 14:01:06 crc kubenswrapper[4812]: I0216 14:01:06.690652 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-fernet-keys\") pod \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\" (UID: \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\") " Feb 16 14:01:06 crc kubenswrapper[4812]: I0216 14:01:06.690704 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-config-data\") pod \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\" (UID: \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\") " Feb 16 14:01:06 crc kubenswrapper[4812]: I0216 14:01:06.690775 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-combined-ca-bundle\") pod \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\" (UID: \"32a3c3bd-297d-49b8-a083-19f25cacf8c2\") " Feb 16 14:01:06 crc kubenswrapper[4812]: I0216 14:01:06.705505 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a3c3bd-297d-49b8-a083-19f25cacf8c2-kube-api-access-ng9x7" 
(OuterVolumeSpecName: "kube-api-access-ng9x7") pod "32a3c3bd-297d-49b8-a083-19f25cacf8c2" (UID: "32a3c3bd-297d-49b8-a083-19f25cacf8c2"). InnerVolumeSpecName "kube-api-access-ng9x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:01:06 crc kubenswrapper[4812]: I0216 14:01:06.711623 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "32a3c3bd-297d-49b8-a083-19f25cacf8c2" (UID: "32a3c3bd-297d-49b8-a083-19f25cacf8c2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:01:06 crc kubenswrapper[4812]: I0216 14:01:06.743519 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32a3c3bd-297d-49b8-a083-19f25cacf8c2" (UID: "32a3c3bd-297d-49b8-a083-19f25cacf8c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:01:06 crc kubenswrapper[4812]: I0216 14:01:06.768933 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-config-data" (OuterVolumeSpecName: "config-data") pod "32a3c3bd-297d-49b8-a083-19f25cacf8c2" (UID: "32a3c3bd-297d-49b8-a083-19f25cacf8c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:01:06 crc kubenswrapper[4812]: I0216 14:01:06.794981 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:06 crc kubenswrapper[4812]: I0216 14:01:06.795054 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:06 crc kubenswrapper[4812]: I0216 14:01:06.795077 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng9x7\" (UniqueName: \"kubernetes.io/projected/32a3c3bd-297d-49b8-a083-19f25cacf8c2-kube-api-access-ng9x7\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:06 crc kubenswrapper[4812]: I0216 14:01:06.795095 4812 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/32a3c3bd-297d-49b8-a083-19f25cacf8c2-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:07 crc kubenswrapper[4812]: I0216 14:01:07.195522 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520841-jq24z" event={"ID":"32a3c3bd-297d-49b8-a083-19f25cacf8c2","Type":"ContainerDied","Data":"3d0cf41690f340f446bfe3611e378a1924aed845832081a04ab7af48651a8426"} Feb 16 14:01:07 crc kubenswrapper[4812]: I0216 14:01:07.195962 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d0cf41690f340f446bfe3611e378a1924aed845832081a04ab7af48651a8426" Feb 16 14:01:07 crc kubenswrapper[4812]: I0216 14:01:07.195559 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520841-jq24z" Feb 16 14:01:07 crc kubenswrapper[4812]: I0216 14:01:07.831150 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hztpd"] Feb 16 14:01:07 crc kubenswrapper[4812]: E0216 14:01:07.832366 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a3c3bd-297d-49b8-a083-19f25cacf8c2" containerName="keystone-cron" Feb 16 14:01:07 crc kubenswrapper[4812]: I0216 14:01:07.832403 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a3c3bd-297d-49b8-a083-19f25cacf8c2" containerName="keystone-cron" Feb 16 14:01:07 crc kubenswrapper[4812]: I0216 14:01:07.832635 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a3c3bd-297d-49b8-a083-19f25cacf8c2" containerName="keystone-cron" Feb 16 14:01:07 crc kubenswrapper[4812]: I0216 14:01:07.834766 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hztpd" Feb 16 14:01:07 crc kubenswrapper[4812]: I0216 14:01:07.844485 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hztpd"] Feb 16 14:01:07 crc kubenswrapper[4812]: I0216 14:01:07.879712 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:01:07 crc kubenswrapper[4812]: E0216 14:01:07.879991 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:01:07 crc kubenswrapper[4812]: I0216 14:01:07.924794 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0abe945a-2756-4c8e-afcc-d530cecc0f67-utilities\") pod \"certified-operators-hztpd\" (UID: \"0abe945a-2756-4c8e-afcc-d530cecc0f67\") " pod="openshift-marketplace/certified-operators-hztpd" Feb 16 14:01:07 crc kubenswrapper[4812]: I0216 14:01:07.925027 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0abe945a-2756-4c8e-afcc-d530cecc0f67-catalog-content\") pod \"certified-operators-hztpd\" (UID: \"0abe945a-2756-4c8e-afcc-d530cecc0f67\") " pod="openshift-marketplace/certified-operators-hztpd" Feb 16 14:01:07 crc kubenswrapper[4812]: I0216 14:01:07.926903 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwxhb\" (UniqueName: \"kubernetes.io/projected/0abe945a-2756-4c8e-afcc-d530cecc0f67-kube-api-access-dwxhb\") pod \"certified-operators-hztpd\" (UID: \"0abe945a-2756-4c8e-afcc-d530cecc0f67\") " pod="openshift-marketplace/certified-operators-hztpd" Feb 16 14:01:08 crc kubenswrapper[4812]: I0216 14:01:08.029059 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwxhb\" (UniqueName: \"kubernetes.io/projected/0abe945a-2756-4c8e-afcc-d530cecc0f67-kube-api-access-dwxhb\") pod \"certified-operators-hztpd\" (UID: \"0abe945a-2756-4c8e-afcc-d530cecc0f67\") " pod="openshift-marketplace/certified-operators-hztpd" Feb 16 14:01:08 crc kubenswrapper[4812]: I0216 14:01:08.029132 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0abe945a-2756-4c8e-afcc-d530cecc0f67-utilities\") pod \"certified-operators-hztpd\" (UID: \"0abe945a-2756-4c8e-afcc-d530cecc0f67\") " pod="openshift-marketplace/certified-operators-hztpd" Feb 16 14:01:08 crc kubenswrapper[4812]: I0216 
14:01:08.029186 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0abe945a-2756-4c8e-afcc-d530cecc0f67-catalog-content\") pod \"certified-operators-hztpd\" (UID: \"0abe945a-2756-4c8e-afcc-d530cecc0f67\") " pod="openshift-marketplace/certified-operators-hztpd" Feb 16 14:01:08 crc kubenswrapper[4812]: I0216 14:01:08.030006 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0abe945a-2756-4c8e-afcc-d530cecc0f67-catalog-content\") pod \"certified-operators-hztpd\" (UID: \"0abe945a-2756-4c8e-afcc-d530cecc0f67\") " pod="openshift-marketplace/certified-operators-hztpd" Feb 16 14:01:08 crc kubenswrapper[4812]: I0216 14:01:08.031815 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0abe945a-2756-4c8e-afcc-d530cecc0f67-utilities\") pod \"certified-operators-hztpd\" (UID: \"0abe945a-2756-4c8e-afcc-d530cecc0f67\") " pod="openshift-marketplace/certified-operators-hztpd" Feb 16 14:01:08 crc kubenswrapper[4812]: I0216 14:01:08.056137 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwxhb\" (UniqueName: \"kubernetes.io/projected/0abe945a-2756-4c8e-afcc-d530cecc0f67-kube-api-access-dwxhb\") pod \"certified-operators-hztpd\" (UID: \"0abe945a-2756-4c8e-afcc-d530cecc0f67\") " pod="openshift-marketplace/certified-operators-hztpd" Feb 16 14:01:08 crc kubenswrapper[4812]: I0216 14:01:08.160477 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hztpd" Feb 16 14:01:08 crc kubenswrapper[4812]: I0216 14:01:08.752637 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hztpd"] Feb 16 14:01:09 crc kubenswrapper[4812]: I0216 14:01:09.232146 4812 generic.go:334] "Generic (PLEG): container finished" podID="0abe945a-2756-4c8e-afcc-d530cecc0f67" containerID="c2d3e81dc7217def6767202e010f5927650b8a365e91100c960fe8b5d4eaefa7" exitCode=0 Feb 16 14:01:09 crc kubenswrapper[4812]: I0216 14:01:09.232237 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hztpd" event={"ID":"0abe945a-2756-4c8e-afcc-d530cecc0f67","Type":"ContainerDied","Data":"c2d3e81dc7217def6767202e010f5927650b8a365e91100c960fe8b5d4eaefa7"} Feb 16 14:01:09 crc kubenswrapper[4812]: I0216 14:01:09.232500 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hztpd" event={"ID":"0abe945a-2756-4c8e-afcc-d530cecc0f67","Type":"ContainerStarted","Data":"4ce31148e9bf508ee03c3c463a2b611ce7c84ed54bd10363dcdcfa7d910c2ba0"} Feb 16 14:01:09 crc kubenswrapper[4812]: E0216 14:01:09.881287 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:01:10 crc kubenswrapper[4812]: I0216 14:01:10.259563 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hztpd" event={"ID":"0abe945a-2756-4c8e-afcc-d530cecc0f67","Type":"ContainerStarted","Data":"5d809b393a3a8cf2a2a32eaf9b09d3b3c1a41b78148ac3229a9a6698f9c2befc"} Feb 16 14:01:11 crc kubenswrapper[4812]: I0216 14:01:11.280206 4812 generic.go:334] "Generic (PLEG): container 
finished" podID="0abe945a-2756-4c8e-afcc-d530cecc0f67" containerID="5d809b393a3a8cf2a2a32eaf9b09d3b3c1a41b78148ac3229a9a6698f9c2befc" exitCode=0 Feb 16 14:01:11 crc kubenswrapper[4812]: I0216 14:01:11.280721 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hztpd" event={"ID":"0abe945a-2756-4c8e-afcc-d530cecc0f67","Type":"ContainerDied","Data":"5d809b393a3a8cf2a2a32eaf9b09d3b3c1a41b78148ac3229a9a6698f9c2befc"} Feb 16 14:01:12 crc kubenswrapper[4812]: I0216 14:01:12.295237 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hztpd" event={"ID":"0abe945a-2756-4c8e-afcc-d530cecc0f67","Type":"ContainerStarted","Data":"fdec6b5a63c8ea87eca6ceed479366dc8758426c631ec9abf00c7fb531f53d52"} Feb 16 14:01:12 crc kubenswrapper[4812]: I0216 14:01:12.322785 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hztpd" podStartSLOduration=2.863421604 podStartE2EDuration="5.322757873s" podCreationTimestamp="2026-02-16 14:01:07 +0000 UTC" firstStartedPulling="2026-02-16 14:01:09.234219753 +0000 UTC m=+1758.298550444" lastFinishedPulling="2026-02-16 14:01:11.693556012 +0000 UTC m=+1760.757886713" observedRunningTime="2026-02-16 14:01:12.314252361 +0000 UTC m=+1761.378583072" watchObservedRunningTime="2026-02-16 14:01:12.322757873 +0000 UTC m=+1761.387088574" Feb 16 14:01:18 crc kubenswrapper[4812]: I0216 14:01:18.160737 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hztpd" Feb 16 14:01:18 crc kubenswrapper[4812]: I0216 14:01:18.162314 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hztpd" Feb 16 14:01:18 crc kubenswrapper[4812]: I0216 14:01:18.219068 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hztpd" Feb 
16 14:01:18 crc kubenswrapper[4812]: I0216 14:01:18.419833 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hztpd" Feb 16 14:01:21 crc kubenswrapper[4812]: I0216 14:01:21.818483 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hztpd"] Feb 16 14:01:21 crc kubenswrapper[4812]: I0216 14:01:21.821035 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hztpd" podUID="0abe945a-2756-4c8e-afcc-d530cecc0f67" containerName="registry-server" containerID="cri-o://fdec6b5a63c8ea87eca6ceed479366dc8758426c631ec9abf00c7fb531f53d52" gracePeriod=2 Feb 16 14:01:21 crc kubenswrapper[4812]: E0216 14:01:21.888985 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.356548 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hztpd" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.439879 4812 generic.go:334] "Generic (PLEG): container finished" podID="0abe945a-2756-4c8e-afcc-d530cecc0f67" containerID="fdec6b5a63c8ea87eca6ceed479366dc8758426c631ec9abf00c7fb531f53d52" exitCode=0 Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.439945 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hztpd" event={"ID":"0abe945a-2756-4c8e-afcc-d530cecc0f67","Type":"ContainerDied","Data":"fdec6b5a63c8ea87eca6ceed479366dc8758426c631ec9abf00c7fb531f53d52"} Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.439986 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hztpd" event={"ID":"0abe945a-2756-4c8e-afcc-d530cecc0f67","Type":"ContainerDied","Data":"4ce31148e9bf508ee03c3c463a2b611ce7c84ed54bd10363dcdcfa7d910c2ba0"} Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.440015 4812 scope.go:117] "RemoveContainer" containerID="fdec6b5a63c8ea87eca6ceed479366dc8758426c631ec9abf00c7fb531f53d52" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.440222 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hztpd" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.443416 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwxhb\" (UniqueName: \"kubernetes.io/projected/0abe945a-2756-4c8e-afcc-d530cecc0f67-kube-api-access-dwxhb\") pod \"0abe945a-2756-4c8e-afcc-d530cecc0f67\" (UID: \"0abe945a-2756-4c8e-afcc-d530cecc0f67\") " Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.444984 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0abe945a-2756-4c8e-afcc-d530cecc0f67-catalog-content\") pod \"0abe945a-2756-4c8e-afcc-d530cecc0f67\" (UID: \"0abe945a-2756-4c8e-afcc-d530cecc0f67\") " Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.445035 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0abe945a-2756-4c8e-afcc-d530cecc0f67-utilities\") pod \"0abe945a-2756-4c8e-afcc-d530cecc0f67\" (UID: \"0abe945a-2756-4c8e-afcc-d530cecc0f67\") " Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.446133 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0abe945a-2756-4c8e-afcc-d530cecc0f67-utilities" (OuterVolumeSpecName: "utilities") pod "0abe945a-2756-4c8e-afcc-d530cecc0f67" (UID: "0abe945a-2756-4c8e-afcc-d530cecc0f67"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.446395 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0abe945a-2756-4c8e-afcc-d530cecc0f67-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.452678 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0abe945a-2756-4c8e-afcc-d530cecc0f67-kube-api-access-dwxhb" (OuterVolumeSpecName: "kube-api-access-dwxhb") pod "0abe945a-2756-4c8e-afcc-d530cecc0f67" (UID: "0abe945a-2756-4c8e-afcc-d530cecc0f67"). InnerVolumeSpecName "kube-api-access-dwxhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.501183 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0abe945a-2756-4c8e-afcc-d530cecc0f67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0abe945a-2756-4c8e-afcc-d530cecc0f67" (UID: "0abe945a-2756-4c8e-afcc-d530cecc0f67"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.541167 4812 scope.go:117] "RemoveContainer" containerID="5d809b393a3a8cf2a2a32eaf9b09d3b3c1a41b78148ac3229a9a6698f9c2befc" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.549260 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0abe945a-2756-4c8e-afcc-d530cecc0f67-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.549892 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwxhb\" (UniqueName: \"kubernetes.io/projected/0abe945a-2756-4c8e-afcc-d530cecc0f67-kube-api-access-dwxhb\") on node \"crc\" DevicePath \"\"" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.574657 4812 scope.go:117] "RemoveContainer" containerID="c2d3e81dc7217def6767202e010f5927650b8a365e91100c960fe8b5d4eaefa7" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.644004 4812 scope.go:117] "RemoveContainer" containerID="fdec6b5a63c8ea87eca6ceed479366dc8758426c631ec9abf00c7fb531f53d52" Feb 16 14:01:22 crc kubenswrapper[4812]: E0216 14:01:22.644957 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdec6b5a63c8ea87eca6ceed479366dc8758426c631ec9abf00c7fb531f53d52\": container with ID starting with fdec6b5a63c8ea87eca6ceed479366dc8758426c631ec9abf00c7fb531f53d52 not found: ID does not exist" containerID="fdec6b5a63c8ea87eca6ceed479366dc8758426c631ec9abf00c7fb531f53d52" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.645083 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdec6b5a63c8ea87eca6ceed479366dc8758426c631ec9abf00c7fb531f53d52"} err="failed to get container status \"fdec6b5a63c8ea87eca6ceed479366dc8758426c631ec9abf00c7fb531f53d52\": rpc error: code = NotFound desc = could not find 
container \"fdec6b5a63c8ea87eca6ceed479366dc8758426c631ec9abf00c7fb531f53d52\": container with ID starting with fdec6b5a63c8ea87eca6ceed479366dc8758426c631ec9abf00c7fb531f53d52 not found: ID does not exist" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.645173 4812 scope.go:117] "RemoveContainer" containerID="5d809b393a3a8cf2a2a32eaf9b09d3b3c1a41b78148ac3229a9a6698f9c2befc" Feb 16 14:01:22 crc kubenswrapper[4812]: E0216 14:01:22.645983 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d809b393a3a8cf2a2a32eaf9b09d3b3c1a41b78148ac3229a9a6698f9c2befc\": container with ID starting with 5d809b393a3a8cf2a2a32eaf9b09d3b3c1a41b78148ac3229a9a6698f9c2befc not found: ID does not exist" containerID="5d809b393a3a8cf2a2a32eaf9b09d3b3c1a41b78148ac3229a9a6698f9c2befc" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.646057 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d809b393a3a8cf2a2a32eaf9b09d3b3c1a41b78148ac3229a9a6698f9c2befc"} err="failed to get container status \"5d809b393a3a8cf2a2a32eaf9b09d3b3c1a41b78148ac3229a9a6698f9c2befc\": rpc error: code = NotFound desc = could not find container \"5d809b393a3a8cf2a2a32eaf9b09d3b3c1a41b78148ac3229a9a6698f9c2befc\": container with ID starting with 5d809b393a3a8cf2a2a32eaf9b09d3b3c1a41b78148ac3229a9a6698f9c2befc not found: ID does not exist" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.646107 4812 scope.go:117] "RemoveContainer" containerID="c2d3e81dc7217def6767202e010f5927650b8a365e91100c960fe8b5d4eaefa7" Feb 16 14:01:22 crc kubenswrapper[4812]: E0216 14:01:22.646645 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2d3e81dc7217def6767202e010f5927650b8a365e91100c960fe8b5d4eaefa7\": container with ID starting with c2d3e81dc7217def6767202e010f5927650b8a365e91100c960fe8b5d4eaefa7 not found: ID does 
not exist" containerID="c2d3e81dc7217def6767202e010f5927650b8a365e91100c960fe8b5d4eaefa7" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.646738 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2d3e81dc7217def6767202e010f5927650b8a365e91100c960fe8b5d4eaefa7"} err="failed to get container status \"c2d3e81dc7217def6767202e010f5927650b8a365e91100c960fe8b5d4eaefa7\": rpc error: code = NotFound desc = could not find container \"c2d3e81dc7217def6767202e010f5927650b8a365e91100c960fe8b5d4eaefa7\": container with ID starting with c2d3e81dc7217def6767202e010f5927650b8a365e91100c960fe8b5d4eaefa7 not found: ID does not exist" Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.788490 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hztpd"] Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.806379 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hztpd"] Feb 16 14:01:22 crc kubenswrapper[4812]: I0216 14:01:22.879649 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:01:22 crc kubenswrapper[4812]: E0216 14:01:22.880701 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:01:23 crc kubenswrapper[4812]: I0216 14:01:23.892376 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0abe945a-2756-4c8e-afcc-d530cecc0f67" path="/var/lib/kubelet/pods/0abe945a-2756-4c8e-afcc-d530cecc0f67/volumes" Feb 16 14:01:34 crc kubenswrapper[4812]: E0216 
14:01:34.882395 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:01:36 crc kubenswrapper[4812]: I0216 14:01:36.880128 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:01:36 crc kubenswrapper[4812]: E0216 14:01:36.880807 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:01:46 crc kubenswrapper[4812]: E0216 14:01:46.882212 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:01:48 crc kubenswrapper[4812]: I0216 14:01:48.880074 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:01:48 crc kubenswrapper[4812]: E0216 14:01:48.880813 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:01:59 crc kubenswrapper[4812]: I0216 14:01:59.880540 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:01:59 crc kubenswrapper[4812]: E0216 14:01:59.881541 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:02:00 crc kubenswrapper[4812]: E0216 14:02:00.882087 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:02:12 crc kubenswrapper[4812]: I0216 14:02:12.879353 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:02:12 crc kubenswrapper[4812]: E0216 14:02:12.880121 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:02:14 crc kubenswrapper[4812]: E0216 14:02:14.883057 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:02:26 crc kubenswrapper[4812]: I0216 14:02:26.319371 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:02:26 crc kubenswrapper[4812]: E0216 14:02:26.320300 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:02:26 crc kubenswrapper[4812]: E0216 14:02:26.326425 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:02:39 crc kubenswrapper[4812]: E0216 14:02:39.882657 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:02:40 crc kubenswrapper[4812]: I0216 14:02:40.879434 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:02:40 crc kubenswrapper[4812]: E0216 14:02:40.879843 4812 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:02:52 crc kubenswrapper[4812]: E0216 14:02:52.883418 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:02:55 crc kubenswrapper[4812]: I0216 14:02:55.880185 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:02:55 crc kubenswrapper[4812]: E0216 14:02:55.881065 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:03:03 crc kubenswrapper[4812]: E0216 14:03:03.881699 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:03:07 crc kubenswrapper[4812]: I0216 14:03:07.049839 4812 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/placement-db-create-v9pfn"] Feb 16 14:03:07 crc kubenswrapper[4812]: I0216 14:03:07.062733 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-474b-account-create-update-gsjf7"] Feb 16 14:03:07 crc kubenswrapper[4812]: I0216 14:03:07.075285 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-474b-account-create-update-gsjf7"] Feb 16 14:03:07 crc kubenswrapper[4812]: I0216 14:03:07.087129 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-v9pfn"] Feb 16 14:03:07 crc kubenswrapper[4812]: I0216 14:03:07.892250 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="239d953a-0da6-460c-8dce-99ff36a1015b" path="/var/lib/kubelet/pods/239d953a-0da6-460c-8dce-99ff36a1015b/volumes" Feb 16 14:03:07 crc kubenswrapper[4812]: I0216 14:03:07.893292 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3054c9c5-945c-43d4-a2c5-adcc6d116329" path="/var/lib/kubelet/pods/3054c9c5-945c-43d4-a2c5-adcc6d116329/volumes" Feb 16 14:03:08 crc kubenswrapper[4812]: I0216 14:03:08.880317 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:03:08 crc kubenswrapper[4812]: E0216 14:03:08.880957 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:03:12 crc kubenswrapper[4812]: I0216 14:03:12.043594 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3a73-account-create-update-tljwz"] Feb 16 14:03:12 crc kubenswrapper[4812]: I0216 14:03:12.055066 4812 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-p2qvq"] Feb 16 14:03:12 crc kubenswrapper[4812]: I0216 14:03:12.065807 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-112b-account-create-update-vwnq8"] Feb 16 14:03:12 crc kubenswrapper[4812]: I0216 14:03:12.077424 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8t48r"] Feb 16 14:03:12 crc kubenswrapper[4812]: I0216 14:03:12.088618 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-p2qvq"] Feb 16 14:03:12 crc kubenswrapper[4812]: I0216 14:03:12.098726 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8t48r"] Feb 16 14:03:12 crc kubenswrapper[4812]: I0216 14:03:12.108910 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-112b-account-create-update-vwnq8"] Feb 16 14:03:12 crc kubenswrapper[4812]: I0216 14:03:12.120574 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3a73-account-create-update-tljwz"] Feb 16 14:03:13 crc kubenswrapper[4812]: I0216 14:03:13.893238 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb" path="/var/lib/kubelet/pods/0bdeb7c5-c26e-4de3-ae86-8520e5baf9fb/volumes" Feb 16 14:03:13 crc kubenswrapper[4812]: I0216 14:03:13.895032 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="509696b4-e17f-4d72-99d2-d2a800398fe6" path="/var/lib/kubelet/pods/509696b4-e17f-4d72-99d2-d2a800398fe6/volumes" Feb 16 14:03:13 crc kubenswrapper[4812]: I0216 14:03:13.895856 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9635671b-a1ee-4374-8487-c492616a699b" path="/var/lib/kubelet/pods/9635671b-a1ee-4374-8487-c492616a699b/volumes" Feb 16 14:03:13 crc kubenswrapper[4812]: I0216 14:03:13.896630 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ec08c9a9-68e9-4615-9375-4511a84ea575" path="/var/lib/kubelet/pods/ec08c9a9-68e9-4615-9375-4511a84ea575/volumes" Feb 16 14:03:18 crc kubenswrapper[4812]: E0216 14:03:18.881102 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:03:19 crc kubenswrapper[4812]: I0216 14:03:19.058025 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bqvq2"] Feb 16 14:03:19 crc kubenswrapper[4812]: I0216 14:03:19.091215 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bqvq2"] Feb 16 14:03:19 crc kubenswrapper[4812]: I0216 14:03:19.893571 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5586976-e0b2-4971-9202-1804e20d413f" path="/var/lib/kubelet/pods/f5586976-e0b2-4971-9202-1804e20d413f/volumes" Feb 16 14:03:20 crc kubenswrapper[4812]: I0216 14:03:20.879571 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:03:21 crc kubenswrapper[4812]: I0216 14:03:21.120767 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerStarted","Data":"2537a5668451bbc3263438cdeabe941020140f9d71754aa3ed0e0ff1820e5ccc"} Feb 16 14:03:30 crc kubenswrapper[4812]: E0216 14:03:30.883207 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" 
podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:03:45 crc kubenswrapper[4812]: E0216 14:03:45.882786 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:03:46 crc kubenswrapper[4812]: I0216 14:03:46.065549 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-b95b-account-create-update-bhdws"] Feb 16 14:03:46 crc kubenswrapper[4812]: I0216 14:03:46.082402 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-eed8-account-create-update-zs2zn"] Feb 16 14:03:46 crc kubenswrapper[4812]: I0216 14:03:46.094908 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-eed8-account-create-update-zs2zn"] Feb 16 14:03:46 crc kubenswrapper[4812]: I0216 14:03:46.105424 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-b95b-account-create-update-bhdws"] Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.045239 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-mjkrp"] Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.056774 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-p8djg"] Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.067284 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-466e-account-create-update-22bqw"] Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.082120 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-cbl5p"] Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.092891 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-mjkrp"] Feb 16 14:03:47 crc 
kubenswrapper[4812]: I0216 14:03:47.102237 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-466e-account-create-update-22bqw"] Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.112498 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-08d4-account-create-update-846z4"] Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.121979 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-p8djg"] Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.131298 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-cbl5p"] Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.141221 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-08d4-account-create-update-846z4"] Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.150932 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-x869h"] Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.161063 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-x869h"] Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.896610 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aee3570-b33b-4898-ad56-62202a1dd25b" path="/var/lib/kubelet/pods/3aee3570-b33b-4898-ad56-62202a1dd25b/volumes" Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.897971 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4445f438-ce8c-4014-ad6f-b892beed381a" path="/var/lib/kubelet/pods/4445f438-ce8c-4014-ad6f-b892beed381a/volumes" Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.898754 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d92a9c-6e64-409e-a324-0061b9b451d0" path="/var/lib/kubelet/pods/49d92a9c-6e64-409e-a324-0061b9b451d0/volumes" Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.900117 4812 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="862482a0-fe2f-481c-a819-4539a198dc9d" path="/var/lib/kubelet/pods/862482a0-fe2f-481c-a819-4539a198dc9d/volumes" Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.900839 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="870350e2-3c24-4788-afb1-8d5a4d77172e" path="/var/lib/kubelet/pods/870350e2-3c24-4788-afb1-8d5a4d77172e/volumes" Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.901506 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a96961c7-e6f3-4cbc-8498-b9e5f023ad2d" path="/var/lib/kubelet/pods/a96961c7-e6f3-4cbc-8498-b9e5f023ad2d/volumes" Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.902157 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5d00fb2-e93c-4b84-b307-f322137b1be4" path="/var/lib/kubelet/pods/c5d00fb2-e93c-4b84-b307-f322137b1be4/volumes" Feb 16 14:03:47 crc kubenswrapper[4812]: I0216 14:03:47.903644 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde98520-6555-417b-851c-14dccde518ad" path="/var/lib/kubelet/pods/fde98520-6555-417b-851c-14dccde518ad/volumes" Feb 16 14:03:57 crc kubenswrapper[4812]: E0216 14:03:57.882668 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:03:58 crc kubenswrapper[4812]: I0216 14:03:58.035616 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-c7brg"] Feb 16 14:03:58 crc kubenswrapper[4812]: I0216 14:03:58.046588 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-c7brg"] Feb 16 14:03:59 crc kubenswrapper[4812]: I0216 14:03:59.893720 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="919aaed2-0230-4b07-aea8-fb57e6917cff" path="/var/lib/kubelet/pods/919aaed2-0230-4b07-aea8-fb57e6917cff/volumes" Feb 16 14:04:06 crc kubenswrapper[4812]: I0216 14:04:06.310571 4812 scope.go:117] "RemoveContainer" containerID="eb54dbfc6f57d2bf16293e83c97b308738834698c42fd8028cbc20cb07c6bd40" Feb 16 14:04:06 crc kubenswrapper[4812]: I0216 14:04:06.335936 4812 scope.go:117] "RemoveContainer" containerID="38146dde31abbe42429203525796659f2696f8ba6195bb373550d9ef46048bdd" Feb 16 14:04:06 crc kubenswrapper[4812]: I0216 14:04:06.367315 4812 scope.go:117] "RemoveContainer" containerID="bcdcba1c809c0bad5327178869ff05d9591b5020d8636ee0f44c09e18a3e9d03" Feb 16 14:04:06 crc kubenswrapper[4812]: I0216 14:04:06.556863 4812 scope.go:117] "RemoveContainer" containerID="5fea5389c5170fdba10b84a6e88b8a99cfa8c7b6bcc240ddfac70cd07febbf90" Feb 16 14:04:06 crc kubenswrapper[4812]: I0216 14:04:06.598976 4812 scope.go:117] "RemoveContainer" containerID="8e11f2007b770e95e73f5cf461bada711f11a105feac23ff151108f101f2a3fa" Feb 16 14:04:06 crc kubenswrapper[4812]: I0216 14:04:06.658706 4812 scope.go:117] "RemoveContainer" containerID="3bf82e30323b293ee35e9dc25e26e8fd94d821b15470c91b5afba006e95d7adc" Feb 16 14:04:06 crc kubenswrapper[4812]: I0216 14:04:06.703713 4812 scope.go:117] "RemoveContainer" containerID="d930b22adca630860f55999176ab9026e0f3be180338420116ccf15e1b1ba6af" Feb 16 14:04:06 crc kubenswrapper[4812]: I0216 14:04:06.756993 4812 scope.go:117] "RemoveContainer" containerID="3afe130fe636a1c04a8ed17bf9c1f9e55a35f252a5ca3114e48a9c3a17d779ca" Feb 16 14:04:06 crc kubenswrapper[4812]: I0216 14:04:06.783754 4812 scope.go:117] "RemoveContainer" containerID="41fe5f5b2186e0fbaa128acb0c5839bc16ef9fe777a37983f299d271181c1325" Feb 16 14:04:06 crc kubenswrapper[4812]: I0216 14:04:06.808373 4812 scope.go:117] "RemoveContainer" containerID="b24a084ba30bf4d0cccce5cd9061fe696362aadb7b4055cd188b6d410529a579" Feb 16 14:04:06 crc kubenswrapper[4812]: I0216 14:04:06.849065 4812 scope.go:117] 
"RemoveContainer" containerID="50371a462b9a7d6943988bc67f7d3c1d2fc29fcc3cecae39c2179648bf384e2a" Feb 16 14:04:06 crc kubenswrapper[4812]: I0216 14:04:06.875896 4812 scope.go:117] "RemoveContainer" containerID="7e76518be875978fc1307e56bb7011001a59c0a0a727e4aad11b3713a7b20fc1" Feb 16 14:04:06 crc kubenswrapper[4812]: I0216 14:04:06.904866 4812 scope.go:117] "RemoveContainer" containerID="07bbd2e2ce9e6f3368748dea83a509ea68554777c5c6f36e0304ca5d77e69d60" Feb 16 14:04:06 crc kubenswrapper[4812]: I0216 14:04:06.933791 4812 scope.go:117] "RemoveContainer" containerID="c9dae9e1e5ca837361102ca3bb5914434a73e943616c143e80467e4d838fcb65" Feb 16 14:04:06 crc kubenswrapper[4812]: I0216 14:04:06.964181 4812 scope.go:117] "RemoveContainer" containerID="c7ce3707d3dd8f34be7c672c96c7336f0058331114a96b125ebf5e4168bdb79b" Feb 16 14:04:07 crc kubenswrapper[4812]: I0216 14:04:07.004389 4812 scope.go:117] "RemoveContainer" containerID="a0e0fbf47d3f8d3903a120e88e251d8dc4bb641ae939a839d8b5ad9d120b6042" Feb 16 14:04:07 crc kubenswrapper[4812]: I0216 14:04:07.027692 4812 scope.go:117] "RemoveContainer" containerID="31b62e22433d87c016c20b14823ea928fcb93820167abd7a9030d9504f64e34e" Feb 16 14:04:08 crc kubenswrapper[4812]: E0216 14:04:08.881149 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:04:14 crc kubenswrapper[4812]: I0216 14:04:14.042261 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-mwzf9"] Feb 16 14:04:14 crc kubenswrapper[4812]: I0216 14:04:14.060826 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-mwzf9"] Feb 16 14:04:15 crc kubenswrapper[4812]: I0216 14:04:15.890294 4812 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="03e0b815-7641-435c-9934-05f5c5307962" path="/var/lib/kubelet/pods/03e0b815-7641-435c-9934-05f5c5307962/volumes" Feb 16 14:04:22 crc kubenswrapper[4812]: E0216 14:04:22.881752 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:04:37 crc kubenswrapper[4812]: E0216 14:04:37.882645 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:04:44 crc kubenswrapper[4812]: I0216 14:04:44.038191 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6q6x6"] Feb 16 14:04:44 crc kubenswrapper[4812]: I0216 14:04:44.048847 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6q6x6"] Feb 16 14:04:45 crc kubenswrapper[4812]: I0216 14:04:45.892739 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a35f33f0-33ff-4938-b15a-455a830ac631" path="/var/lib/kubelet/pods/a35f33f0-33ff-4938-b15a-455a830ac631/volumes" Feb 16 14:04:51 crc kubenswrapper[4812]: E0216 14:04:51.890348 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:05:02 crc kubenswrapper[4812]: I0216 14:05:02.882936 4812 provider.go:102] 
Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 14:05:03 crc kubenswrapper[4812]: E0216 14:05:03.016086 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 14:05:03 crc kubenswrapper[4812]: E0216 14:05:03.016178 4812 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 14:05:03 crc kubenswrapper[4812]: E0216 14:05:03.016392 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq54s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-krnzs_openstack(a7d4eae6-781f-4675-a6c3-ee0f1589c735): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 14:05:03 crc kubenswrapper[4812]: E0216 14:05:03.017594 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:05:05 crc kubenswrapper[4812]: I0216 14:05:05.055735 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-d76qk"] Feb 16 14:05:05 crc kubenswrapper[4812]: I0216 14:05:05.077519 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hcgfc"] Feb 16 14:05:05 crc kubenswrapper[4812]: I0216 14:05:05.147507 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hcgfc"] Feb 16 14:05:05 crc kubenswrapper[4812]: I0216 14:05:05.159968 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-d76qk"] Feb 16 14:05:05 crc kubenswrapper[4812]: I0216 14:05:05.905257 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b502458-ea63-4fa7-80b5-5812a46900f4" path="/var/lib/kubelet/pods/2b502458-ea63-4fa7-80b5-5812a46900f4/volumes" Feb 16 14:05:05 crc kubenswrapper[4812]: I0216 14:05:05.906812 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3e61e08-7ed1-43ed-a137-910b10e85e36" path="/var/lib/kubelet/pods/b3e61e08-7ed1-43ed-a137-910b10e85e36/volumes" Feb 16 14:05:07 crc kubenswrapper[4812]: I0216 14:05:07.353664 4812 scope.go:117] "RemoveContainer" containerID="915c0bff5b0f180289e5712e4550fbca30c9ec4d16c75d57902f289f8843fe63" Feb 16 14:05:07 crc kubenswrapper[4812]: I0216 14:05:07.435262 4812 scope.go:117] "RemoveContainer" containerID="8f5f581deb7240f85d1842eb1a42809ae5c341b80f7f652267f30ad19f9e2253" Feb 16 14:05:07 crc kubenswrapper[4812]: I0216 14:05:07.716549 4812 scope.go:117] "RemoveContainer" containerID="bde09c54d3755326e46294c9aa3086a0cacb04f9e964ac8d8b7cd14f37f0b309" Feb 16 14:05:07 crc kubenswrapper[4812]: I0216 14:05:07.769310 4812 scope.go:117] "RemoveContainer" containerID="4f93cb8c7224bf58c7d9140abffaf7b9a8aea79dd27a2b796acaaf74c8817355" Feb 16 14:05:14 crc 
kubenswrapper[4812]: E0216 14:05:14.881561 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:05:27 crc kubenswrapper[4812]: I0216 14:05:27.061632 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-p4bgr"] Feb 16 14:05:27 crc kubenswrapper[4812]: I0216 14:05:27.076599 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-p4bgr"] Feb 16 14:05:27 crc kubenswrapper[4812]: I0216 14:05:27.891874 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd76f722-eb61-4676-9456-9a9bb443ef16" path="/var/lib/kubelet/pods/dd76f722-eb61-4676-9456-9a9bb443ef16/volumes" Feb 16 14:05:28 crc kubenswrapper[4812]: I0216 14:05:28.029865 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qj2kj"] Feb 16 14:05:28 crc kubenswrapper[4812]: I0216 14:05:28.042800 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qj2kj"] Feb 16 14:05:29 crc kubenswrapper[4812]: E0216 14:05:29.882387 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:05:29 crc kubenswrapper[4812]: I0216 14:05:29.895127 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9d0140e-e353-40a3-8970-5007408f4cb8" path="/var/lib/kubelet/pods/d9d0140e-e353-40a3-8970-5007408f4cb8/volumes" Feb 16 14:05:42 crc kubenswrapper[4812]: E0216 14:05:42.881868 4812 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:05:44 crc kubenswrapper[4812]: I0216 14:05:44.549151 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:05:44 crc kubenswrapper[4812]: I0216 14:05:44.549972 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:05:56 crc kubenswrapper[4812]: E0216 14:05:56.882018 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:06:07 crc kubenswrapper[4812]: I0216 14:06:07.952704 4812 scope.go:117] "RemoveContainer" containerID="672eba311a28c7448e9d6fe76a5309a2c3f2047236230c7bbc97c9cc32b8f3ec" Feb 16 14:06:08 crc kubenswrapper[4812]: I0216 14:06:08.007152 4812 scope.go:117] "RemoveContainer" containerID="8a9722f9ebba8ea6d76847ea76a8f1971a76074357b94ead45ba53cb9e0beca4" Feb 16 14:06:09 crc kubenswrapper[4812]: E0216 14:06:09.889797 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:06:14 crc kubenswrapper[4812]: I0216 14:06:14.557337 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:06:14 crc kubenswrapper[4812]: I0216 14:06:14.557981 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:06:21 crc kubenswrapper[4812]: E0216 14:06:21.892502 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:06:34 crc kubenswrapper[4812]: E0216 14:06:34.968749 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:06:39 crc kubenswrapper[4812]: I0216 14:06:39.088423 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-q6dqv"] Feb 16 14:06:39 crc 
kubenswrapper[4812]: I0216 14:06:39.101484 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6dhdk"] Feb 16 14:06:39 crc kubenswrapper[4812]: I0216 14:06:39.111067 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-q6dqv"] Feb 16 14:06:39 crc kubenswrapper[4812]: I0216 14:06:39.124012 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6dhdk"] Feb 16 14:06:39 crc kubenswrapper[4812]: I0216 14:06:39.893572 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1622167e-23ac-4689-8708-02bfe0050250" path="/var/lib/kubelet/pods/1622167e-23ac-4689-8708-02bfe0050250/volumes" Feb 16 14:06:39 crc kubenswrapper[4812]: I0216 14:06:39.894495 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99792a16-b3c8-4956-9f97-0c64ad3f97d3" path="/var/lib/kubelet/pods/99792a16-b3c8-4956-9f97-0c64ad3f97d3/volumes" Feb 16 14:06:40 crc kubenswrapper[4812]: I0216 14:06:40.039051 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2f15-account-create-update-kzw5k"] Feb 16 14:06:40 crc kubenswrapper[4812]: I0216 14:06:40.052286 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-wwcfc"] Feb 16 14:06:40 crc kubenswrapper[4812]: I0216 14:06:40.064854 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2f15-account-create-update-kzw5k"] Feb 16 14:06:40 crc kubenswrapper[4812]: I0216 14:06:40.087637 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-wwcfc"] Feb 16 14:06:40 crc kubenswrapper[4812]: I0216 14:06:40.109991 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8835-account-create-update-tnlkp"] Feb 16 14:06:40 crc kubenswrapper[4812]: I0216 14:06:40.121812 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8835-account-create-update-tnlkp"] 
Feb 16 14:06:41 crc kubenswrapper[4812]: I0216 14:06:41.034115 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-1437-account-create-update-xp5qz"] Feb 16 14:06:41 crc kubenswrapper[4812]: I0216 14:06:41.045598 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-1437-account-create-update-xp5qz"] Feb 16 14:06:41 crc kubenswrapper[4812]: I0216 14:06:41.892698 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="124644b5-886b-4bd1-af08-1ddc88e0ac9d" path="/var/lib/kubelet/pods/124644b5-886b-4bd1-af08-1ddc88e0ac9d/volumes" Feb 16 14:06:41 crc kubenswrapper[4812]: I0216 14:06:41.893403 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2196ced1-8ac4-4012-8791-b9487350bd38" path="/var/lib/kubelet/pods/2196ced1-8ac4-4012-8791-b9487350bd38/volumes" Feb 16 14:06:41 crc kubenswrapper[4812]: I0216 14:06:41.894102 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c3e6add-a453-46a2-b3ef-4c92d6c2426a" path="/var/lib/kubelet/pods/3c3e6add-a453-46a2-b3ef-4c92d6c2426a/volumes" Feb 16 14:06:41 crc kubenswrapper[4812]: I0216 14:06:41.894717 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78d112fe-cdc5-4d0e-8636-49878e3888d9" path="/var/lib/kubelet/pods/78d112fe-cdc5-4d0e-8636-49878e3888d9/volumes" Feb 16 14:06:44 crc kubenswrapper[4812]: I0216 14:06:44.548922 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:06:44 crc kubenswrapper[4812]: I0216 14:06:44.549397 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:06:44 crc kubenswrapper[4812]: I0216 14:06:44.549498 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 14:06:44 crc kubenswrapper[4812]: I0216 14:06:44.550674 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2537a5668451bbc3263438cdeabe941020140f9d71754aa3ed0e0ff1820e5ccc"} pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 14:06:44 crc kubenswrapper[4812]: I0216 14:06:44.550859 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" containerID="cri-o://2537a5668451bbc3263438cdeabe941020140f9d71754aa3ed0e0ff1820e5ccc" gracePeriod=600 Feb 16 14:06:45 crc kubenswrapper[4812]: I0216 14:06:45.652491 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerDied","Data":"2537a5668451bbc3263438cdeabe941020140f9d71754aa3ed0e0ff1820e5ccc"} Feb 16 14:06:45 crc kubenswrapper[4812]: I0216 14:06:45.652554 4812 generic.go:334] "Generic (PLEG): container finished" podID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerID="2537a5668451bbc3263438cdeabe941020140f9d71754aa3ed0e0ff1820e5ccc" exitCode=0 Feb 16 14:06:45 crc kubenswrapper[4812]: I0216 14:06:45.653120 4812 scope.go:117] "RemoveContainer" containerID="fe1a8ada12fd81917eb3f115f0ed57838961e98daadca433ae78ae9036026fef" Feb 16 14:06:45 crc kubenswrapper[4812]: I0216 14:06:45.653152 4812 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerStarted","Data":"f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70"} Feb 16 14:06:48 crc kubenswrapper[4812]: E0216 14:06:48.882715 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:06:58 crc kubenswrapper[4812]: I0216 14:06:58.925232 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t2xkx"] Feb 16 14:06:58 crc kubenswrapper[4812]: E0216 14:06:58.927118 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0abe945a-2756-4c8e-afcc-d530cecc0f67" containerName="registry-server" Feb 16 14:06:58 crc kubenswrapper[4812]: I0216 14:06:58.927148 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0abe945a-2756-4c8e-afcc-d530cecc0f67" containerName="registry-server" Feb 16 14:06:58 crc kubenswrapper[4812]: E0216 14:06:58.927191 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0abe945a-2756-4c8e-afcc-d530cecc0f67" containerName="extract-utilities" Feb 16 14:06:58 crc kubenswrapper[4812]: I0216 14:06:58.927201 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0abe945a-2756-4c8e-afcc-d530cecc0f67" containerName="extract-utilities" Feb 16 14:06:58 crc kubenswrapper[4812]: E0216 14:06:58.927213 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0abe945a-2756-4c8e-afcc-d530cecc0f67" containerName="extract-content" Feb 16 14:06:58 crc kubenswrapper[4812]: I0216 14:06:58.927222 4812 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0abe945a-2756-4c8e-afcc-d530cecc0f67" containerName="extract-content"
Feb 16 14:06:58 crc kubenswrapper[4812]: I0216 14:06:58.927558 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0abe945a-2756-4c8e-afcc-d530cecc0f67" containerName="registry-server"
Feb 16 14:06:58 crc kubenswrapper[4812]: I0216 14:06:58.931984 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:06:58 crc kubenswrapper[4812]: I0216 14:06:58.959099 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2xkx"]
Feb 16 14:06:58 crc kubenswrapper[4812]: I0216 14:06:58.984881 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14accaf-a9bb-4863-8ac6-159dcbb40006-catalog-content\") pod \"redhat-marketplace-t2xkx\" (UID: \"b14accaf-a9bb-4863-8ac6-159dcbb40006\") " pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:06:58 crc kubenswrapper[4812]: I0216 14:06:58.986199 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cblvh\" (UniqueName: \"kubernetes.io/projected/b14accaf-a9bb-4863-8ac6-159dcbb40006-kube-api-access-cblvh\") pod \"redhat-marketplace-t2xkx\" (UID: \"b14accaf-a9bb-4863-8ac6-159dcbb40006\") " pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:06:58 crc kubenswrapper[4812]: I0216 14:06:58.986338 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14accaf-a9bb-4863-8ac6-159dcbb40006-utilities\") pod \"redhat-marketplace-t2xkx\" (UID: \"b14accaf-a9bb-4863-8ac6-159dcbb40006\") " pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:06:59 crc kubenswrapper[4812]: I0216 14:06:59.090846 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14accaf-a9bb-4863-8ac6-159dcbb40006-catalog-content\") pod \"redhat-marketplace-t2xkx\" (UID: \"b14accaf-a9bb-4863-8ac6-159dcbb40006\") " pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:06:59 crc kubenswrapper[4812]: I0216 14:06:59.090930 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cblvh\" (UniqueName: \"kubernetes.io/projected/b14accaf-a9bb-4863-8ac6-159dcbb40006-kube-api-access-cblvh\") pod \"redhat-marketplace-t2xkx\" (UID: \"b14accaf-a9bb-4863-8ac6-159dcbb40006\") " pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:06:59 crc kubenswrapper[4812]: I0216 14:06:59.090973 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14accaf-a9bb-4863-8ac6-159dcbb40006-utilities\") pod \"redhat-marketplace-t2xkx\" (UID: \"b14accaf-a9bb-4863-8ac6-159dcbb40006\") " pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:06:59 crc kubenswrapper[4812]: I0216 14:06:59.091884 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14accaf-a9bb-4863-8ac6-159dcbb40006-utilities\") pod \"redhat-marketplace-t2xkx\" (UID: \"b14accaf-a9bb-4863-8ac6-159dcbb40006\") " pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:06:59 crc kubenswrapper[4812]: I0216 14:06:59.092010 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14accaf-a9bb-4863-8ac6-159dcbb40006-catalog-content\") pod \"redhat-marketplace-t2xkx\" (UID: \"b14accaf-a9bb-4863-8ac6-159dcbb40006\") " pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:06:59 crc kubenswrapper[4812]: I0216 14:06:59.116972 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cblvh\" (UniqueName: \"kubernetes.io/projected/b14accaf-a9bb-4863-8ac6-159dcbb40006-kube-api-access-cblvh\") pod \"redhat-marketplace-t2xkx\" (UID: \"b14accaf-a9bb-4863-8ac6-159dcbb40006\") " pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:06:59 crc kubenswrapper[4812]: I0216 14:06:59.263394 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:06:59 crc kubenswrapper[4812]: I0216 14:06:59.845511 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2xkx"]
Feb 16 14:07:00 crc kubenswrapper[4812]: I0216 14:07:00.815581 4812 generic.go:334] "Generic (PLEG): container finished" podID="b14accaf-a9bb-4863-8ac6-159dcbb40006" containerID="e1e3047085ffa0fe957424a122e729f7dfcd99f9ca609a0d0a04b44596456e05" exitCode=0
Feb 16 14:07:00 crc kubenswrapper[4812]: I0216 14:07:00.815691 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2xkx" event={"ID":"b14accaf-a9bb-4863-8ac6-159dcbb40006","Type":"ContainerDied","Data":"e1e3047085ffa0fe957424a122e729f7dfcd99f9ca609a0d0a04b44596456e05"}
Feb 16 14:07:00 crc kubenswrapper[4812]: I0216 14:07:00.816255 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2xkx" event={"ID":"b14accaf-a9bb-4863-8ac6-159dcbb40006","Type":"ContainerStarted","Data":"b70d5784c24576ac7439c0f338b28a15553e1575c319251ba6ed02740c6a7a21"}
Feb 16 14:07:01 crc kubenswrapper[4812]: I0216 14:07:01.828707 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2xkx" event={"ID":"b14accaf-a9bb-4863-8ac6-159dcbb40006","Type":"ContainerStarted","Data":"433a3c201532d78b2f3f858f37fcb08ed3bbdb2b1e921d3a254cd52c8170f933"}
Feb 16 14:07:01 crc kubenswrapper[4812]: E0216 14:07:01.887959 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735"
Feb 16 14:07:02 crc kubenswrapper[4812]: I0216 14:07:02.840051 4812 generic.go:334] "Generic (PLEG): container finished" podID="b14accaf-a9bb-4863-8ac6-159dcbb40006" containerID="433a3c201532d78b2f3f858f37fcb08ed3bbdb2b1e921d3a254cd52c8170f933" exitCode=0
Feb 16 14:07:02 crc kubenswrapper[4812]: I0216 14:07:02.840596 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2xkx" event={"ID":"b14accaf-a9bb-4863-8ac6-159dcbb40006","Type":"ContainerDied","Data":"433a3c201532d78b2f3f858f37fcb08ed3bbdb2b1e921d3a254cd52c8170f933"}
Feb 16 14:07:03 crc kubenswrapper[4812]: I0216 14:07:03.856577 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2xkx" event={"ID":"b14accaf-a9bb-4863-8ac6-159dcbb40006","Type":"ContainerStarted","Data":"69b37e1d3d0e9c8a03e32f184e3d9216db6c14496ac9e3fe69ac6f600a7a2852"}
Feb 16 14:07:03 crc kubenswrapper[4812]: I0216 14:07:03.877310 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t2xkx" podStartSLOduration=3.464546763 podStartE2EDuration="5.877278615s" podCreationTimestamp="2026-02-16 14:06:58 +0000 UTC" firstStartedPulling="2026-02-16 14:07:00.819380577 +0000 UTC m=+2109.883711278" lastFinishedPulling="2026-02-16 14:07:03.232112429 +0000 UTC m=+2112.296443130" observedRunningTime="2026-02-16 14:07:03.872404167 +0000 UTC m=+2112.936734888" watchObservedRunningTime="2026-02-16 14:07:03.877278615 +0000 UTC m=+2112.941609316"
Feb 16 14:07:08 crc kubenswrapper[4812]: I0216 14:07:08.114035 4812 scope.go:117] "RemoveContainer" containerID="28b7d53e73afee19fcb85699025053acf8ecca0824ae02179b102458d4dcf726"
Feb 16 14:07:08 crc kubenswrapper[4812]: I0216 14:07:08.152514 4812 scope.go:117] "RemoveContainer" containerID="865a3cd7799f0720376a5a17a3b737384ca24fe5013f9c4df2333093bccc22b5"
Feb 16 14:07:08 crc kubenswrapper[4812]: I0216 14:07:08.207422 4812 scope.go:117] "RemoveContainer" containerID="736ce3c115f45b44c8dd75eded7a1a2338b68279f8f3dd04af39b3dd25327e65"
Feb 16 14:07:08 crc kubenswrapper[4812]: I0216 14:07:08.270945 4812 scope.go:117] "RemoveContainer" containerID="1e27c8856c5b857d056062f09160cbc3743ded64f797ed789869bcb56a775c50"
Feb 16 14:07:08 crc kubenswrapper[4812]: I0216 14:07:08.336086 4812 scope.go:117] "RemoveContainer" containerID="74f83999946ea05dd0befcb351e0d879a5b07dc9a50d040340e2f6d02c535073"
Feb 16 14:07:08 crc kubenswrapper[4812]: I0216 14:07:08.395096 4812 scope.go:117] "RemoveContainer" containerID="e1ede15a98b96250acf05fdeea17efa9b1b5467727999d3424899e30f915ff10"
Feb 16 14:07:09 crc kubenswrapper[4812]: I0216 14:07:09.264529 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:07:09 crc kubenswrapper[4812]: I0216 14:07:09.264905 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:07:09 crc kubenswrapper[4812]: I0216 14:07:09.320892 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:07:09 crc kubenswrapper[4812]: I0216 14:07:09.981756 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:07:10 crc kubenswrapper[4812]: I0216 14:07:10.055501 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2xkx"]
Feb 16 14:07:11 crc kubenswrapper[4812]: I0216 14:07:11.950805 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t2xkx" podUID="b14accaf-a9bb-4863-8ac6-159dcbb40006" containerName="registry-server" containerID="cri-o://69b37e1d3d0e9c8a03e32f184e3d9216db6c14496ac9e3fe69ac6f600a7a2852" gracePeriod=2
Feb 16 14:07:12 crc kubenswrapper[4812]: I0216 14:07:12.615570 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:07:12 crc kubenswrapper[4812]: I0216 14:07:12.667975 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14accaf-a9bb-4863-8ac6-159dcbb40006-catalog-content\") pod \"b14accaf-a9bb-4863-8ac6-159dcbb40006\" (UID: \"b14accaf-a9bb-4863-8ac6-159dcbb40006\") "
Feb 16 14:07:12 crc kubenswrapper[4812]: I0216 14:07:12.668807 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14accaf-a9bb-4863-8ac6-159dcbb40006-utilities\") pod \"b14accaf-a9bb-4863-8ac6-159dcbb40006\" (UID: \"b14accaf-a9bb-4863-8ac6-159dcbb40006\") "
Feb 16 14:07:12 crc kubenswrapper[4812]: I0216 14:07:12.668976 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cblvh\" (UniqueName: \"kubernetes.io/projected/b14accaf-a9bb-4863-8ac6-159dcbb40006-kube-api-access-cblvh\") pod \"b14accaf-a9bb-4863-8ac6-159dcbb40006\" (UID: \"b14accaf-a9bb-4863-8ac6-159dcbb40006\") "
Feb 16 14:07:12 crc kubenswrapper[4812]: I0216 14:07:12.670005 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b14accaf-a9bb-4863-8ac6-159dcbb40006-utilities" (OuterVolumeSpecName: "utilities") pod "b14accaf-a9bb-4863-8ac6-159dcbb40006" (UID: "b14accaf-a9bb-4863-8ac6-159dcbb40006"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:07:12 crc kubenswrapper[4812]: I0216 14:07:12.678149 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b14accaf-a9bb-4863-8ac6-159dcbb40006-kube-api-access-cblvh" (OuterVolumeSpecName: "kube-api-access-cblvh") pod "b14accaf-a9bb-4863-8ac6-159dcbb40006" (UID: "b14accaf-a9bb-4863-8ac6-159dcbb40006"). InnerVolumeSpecName "kube-api-access-cblvh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:07:12 crc kubenswrapper[4812]: I0216 14:07:12.700669 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b14accaf-a9bb-4863-8ac6-159dcbb40006-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b14accaf-a9bb-4863-8ac6-159dcbb40006" (UID: "b14accaf-a9bb-4863-8ac6-159dcbb40006"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:07:12 crc kubenswrapper[4812]: I0216 14:07:12.771612 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14accaf-a9bb-4863-8ac6-159dcbb40006-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 14:07:12 crc kubenswrapper[4812]: I0216 14:07:12.771670 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14accaf-a9bb-4863-8ac6-159dcbb40006-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 14:07:12 crc kubenswrapper[4812]: I0216 14:07:12.771681 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cblvh\" (UniqueName: \"kubernetes.io/projected/b14accaf-a9bb-4863-8ac6-159dcbb40006-kube-api-access-cblvh\") on node \"crc\" DevicePath \"\""
Feb 16 14:07:12 crc kubenswrapper[4812]: I0216 14:07:12.966471 4812 generic.go:334] "Generic (PLEG): container finished" podID="b14accaf-a9bb-4863-8ac6-159dcbb40006" containerID="69b37e1d3d0e9c8a03e32f184e3d9216db6c14496ac9e3fe69ac6f600a7a2852" exitCode=0
Feb 16 14:07:12 crc kubenswrapper[4812]: I0216 14:07:12.966552 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t2xkx"
Feb 16 14:07:12 crc kubenswrapper[4812]: I0216 14:07:12.966556 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2xkx" event={"ID":"b14accaf-a9bb-4863-8ac6-159dcbb40006","Type":"ContainerDied","Data":"69b37e1d3d0e9c8a03e32f184e3d9216db6c14496ac9e3fe69ac6f600a7a2852"}
Feb 16 14:07:12 crc kubenswrapper[4812]: I0216 14:07:12.966652 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2xkx" event={"ID":"b14accaf-a9bb-4863-8ac6-159dcbb40006","Type":"ContainerDied","Data":"b70d5784c24576ac7439c0f338b28a15553e1575c319251ba6ed02740c6a7a21"}
Feb 16 14:07:12 crc kubenswrapper[4812]: I0216 14:07:12.966693 4812 scope.go:117] "RemoveContainer" containerID="69b37e1d3d0e9c8a03e32f184e3d9216db6c14496ac9e3fe69ac6f600a7a2852"
Feb 16 14:07:13 crc kubenswrapper[4812]: I0216 14:07:13.001072 4812 scope.go:117] "RemoveContainer" containerID="433a3c201532d78b2f3f858f37fcb08ed3bbdb2b1e921d3a254cd52c8170f933"
Feb 16 14:07:13 crc kubenswrapper[4812]: I0216 14:07:13.012817 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2xkx"]
Feb 16 14:07:13 crc kubenswrapper[4812]: I0216 14:07:13.024081 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2xkx"]
Feb 16 14:07:13 crc kubenswrapper[4812]: I0216 14:07:13.041505 4812 scope.go:117] "RemoveContainer" containerID="e1e3047085ffa0fe957424a122e729f7dfcd99f9ca609a0d0a04b44596456e05"
Feb 16 14:07:13 crc kubenswrapper[4812]: I0216 14:07:13.058979 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5lk8n"]
Feb 16 14:07:13 crc kubenswrapper[4812]: I0216 14:07:13.077533 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5lk8n"]
Feb 16 14:07:13 crc kubenswrapper[4812]: I0216 14:07:13.094270 4812 scope.go:117] "RemoveContainer" containerID="69b37e1d3d0e9c8a03e32f184e3d9216db6c14496ac9e3fe69ac6f600a7a2852"
Feb 16 14:07:13 crc kubenswrapper[4812]: E0216 14:07:13.095093 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69b37e1d3d0e9c8a03e32f184e3d9216db6c14496ac9e3fe69ac6f600a7a2852\": container with ID starting with 69b37e1d3d0e9c8a03e32f184e3d9216db6c14496ac9e3fe69ac6f600a7a2852 not found: ID does not exist" containerID="69b37e1d3d0e9c8a03e32f184e3d9216db6c14496ac9e3fe69ac6f600a7a2852"
Feb 16 14:07:13 crc kubenswrapper[4812]: I0216 14:07:13.095232 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69b37e1d3d0e9c8a03e32f184e3d9216db6c14496ac9e3fe69ac6f600a7a2852"} err="failed to get container status \"69b37e1d3d0e9c8a03e32f184e3d9216db6c14496ac9e3fe69ac6f600a7a2852\": rpc error: code = NotFound desc = could not find container \"69b37e1d3d0e9c8a03e32f184e3d9216db6c14496ac9e3fe69ac6f600a7a2852\": container with ID starting with 69b37e1d3d0e9c8a03e32f184e3d9216db6c14496ac9e3fe69ac6f600a7a2852 not found: ID does not exist"
Feb 16 14:07:13 crc kubenswrapper[4812]: I0216 14:07:13.095326 4812 scope.go:117] "RemoveContainer" containerID="433a3c201532d78b2f3f858f37fcb08ed3bbdb2b1e921d3a254cd52c8170f933"
Feb 16 14:07:13 crc kubenswrapper[4812]: E0216 14:07:13.095772 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"433a3c201532d78b2f3f858f37fcb08ed3bbdb2b1e921d3a254cd52c8170f933\": container with ID starting with 433a3c201532d78b2f3f858f37fcb08ed3bbdb2b1e921d3a254cd52c8170f933 not found: ID does not exist" containerID="433a3c201532d78b2f3f858f37fcb08ed3bbdb2b1e921d3a254cd52c8170f933"
Feb 16 14:07:13 crc kubenswrapper[4812]: I0216 14:07:13.095852 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433a3c201532d78b2f3f858f37fcb08ed3bbdb2b1e921d3a254cd52c8170f933"} err="failed to get container status \"433a3c201532d78b2f3f858f37fcb08ed3bbdb2b1e921d3a254cd52c8170f933\": rpc error: code = NotFound desc = could not find container \"433a3c201532d78b2f3f858f37fcb08ed3bbdb2b1e921d3a254cd52c8170f933\": container with ID starting with 433a3c201532d78b2f3f858f37fcb08ed3bbdb2b1e921d3a254cd52c8170f933 not found: ID does not exist"
Feb 16 14:07:13 crc kubenswrapper[4812]: I0216 14:07:13.095890 4812 scope.go:117] "RemoveContainer" containerID="e1e3047085ffa0fe957424a122e729f7dfcd99f9ca609a0d0a04b44596456e05"
Feb 16 14:07:13 crc kubenswrapper[4812]: E0216 14:07:13.096156 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1e3047085ffa0fe957424a122e729f7dfcd99f9ca609a0d0a04b44596456e05\": container with ID starting with e1e3047085ffa0fe957424a122e729f7dfcd99f9ca609a0d0a04b44596456e05 not found: ID does not exist" containerID="e1e3047085ffa0fe957424a122e729f7dfcd99f9ca609a0d0a04b44596456e05"
Feb 16 14:07:13 crc kubenswrapper[4812]: I0216 14:07:13.096238 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e3047085ffa0fe957424a122e729f7dfcd99f9ca609a0d0a04b44596456e05"} err="failed to get container status \"e1e3047085ffa0fe957424a122e729f7dfcd99f9ca609a0d0a04b44596456e05\": rpc error: code = NotFound desc = could not find container \"e1e3047085ffa0fe957424a122e729f7dfcd99f9ca609a0d0a04b44596456e05\": container with ID starting with e1e3047085ffa0fe957424a122e729f7dfcd99f9ca609a0d0a04b44596456e05 not found: ID does not exist"
Feb 16 14:07:13 crc kubenswrapper[4812]: I0216 14:07:13.899591 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c" path="/var/lib/kubelet/pods/7b9a1ea5-9cb2-4d3e-90fc-fb06c5e3304c/volumes"
Feb 16 14:07:13 crc kubenswrapper[4812]: I0216 14:07:13.901436 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b14accaf-a9bb-4863-8ac6-159dcbb40006" path="/var/lib/kubelet/pods/b14accaf-a9bb-4863-8ac6-159dcbb40006/volumes"
Feb 16 14:07:14 crc kubenswrapper[4812]: E0216 14:07:14.882817 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735"
Feb 16 14:07:25 crc kubenswrapper[4812]: E0216 14:07:25.885726 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735"
Feb 16 14:07:37 crc kubenswrapper[4812]: E0216 14:07:37.882281 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735"
Feb 16 14:07:40 crc kubenswrapper[4812]: I0216 14:07:40.047135 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-46bzm"]
Feb 16 14:07:40 crc kubenswrapper[4812]: I0216 14:07:40.061501 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-46bzm"]
Feb 16 14:07:41 crc kubenswrapper[4812]: I0216 14:07:41.893464 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f356bcf-8719-4c4d-a9f8-b21489380dd8" path="/var/lib/kubelet/pods/9f356bcf-8719-4c4d-a9f8-b21489380dd8/volumes"
Feb 16 14:07:43 crc kubenswrapper[4812]: I0216 14:07:43.034575 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4snbn"]
Feb 16 14:07:43 crc kubenswrapper[4812]: I0216 14:07:43.046441 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4snbn"]
Feb 16 14:07:43 crc kubenswrapper[4812]: I0216 14:07:43.891645 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd" path="/var/lib/kubelet/pods/3fd87c54-c4b6-4aaf-9c67-31b1bf2e43bd/volumes"
Feb 16 14:07:51 crc kubenswrapper[4812]: E0216 14:07:51.892216 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735"
Feb 16 14:08:04 crc kubenswrapper[4812]: E0216 14:08:04.880907 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735"
Feb 16 14:08:08 crc kubenswrapper[4812]: I0216 14:08:08.584347 4812 scope.go:117] "RemoveContainer" containerID="66dba176812ac361fc65ee2a48bea40acb3b46a5825e514c2c7c4d21aea33468"
Feb 16 14:08:08 crc kubenswrapper[4812]: I0216 14:08:08.638364 4812 scope.go:117] "RemoveContainer" containerID="dd8a762aa4a7f6dcf51ecd0d2a09f6a31fcbeb7037cb9a6c477d3fc18f074a98"
Feb 16 14:08:08 crc kubenswrapper[4812]: I0216 14:08:08.688642 4812 scope.go:117] "RemoveContainer" containerID="828887c156eb0c0a116591a211bcfb060d33115558755f9c952c246f28a2e6c3"
Feb 16 14:08:16 crc kubenswrapper[4812]: E0216 14:08:16.882141 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735"
Feb 16 14:08:26 crc kubenswrapper[4812]: I0216 14:08:26.056197 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-p7tcs"]
Feb 16 14:08:26 crc kubenswrapper[4812]: I0216 14:08:26.067992 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-p7tcs"]
Feb 16 14:08:27 crc kubenswrapper[4812]: I0216 14:08:27.894733 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c77cba8e-f37e-4a5f-a795-13999695c004" path="/var/lib/kubelet/pods/c77cba8e-f37e-4a5f-a795-13999695c004/volumes"
Feb 16 14:08:29 crc kubenswrapper[4812]: E0216 14:08:29.881902 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735"
Feb 16 14:08:42 crc kubenswrapper[4812]: E0216 14:08:42.882539 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735"
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.308028 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fdbd7"]
Feb 16 14:08:43 crc kubenswrapper[4812]: E0216 14:08:43.308892 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14accaf-a9bb-4863-8ac6-159dcbb40006" containerName="extract-utilities"
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.308939 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14accaf-a9bb-4863-8ac6-159dcbb40006" containerName="extract-utilities"
Feb 16 14:08:43 crc kubenswrapper[4812]: E0216 14:08:43.308989 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14accaf-a9bb-4863-8ac6-159dcbb40006" containerName="extract-content"
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.309000 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14accaf-a9bb-4863-8ac6-159dcbb40006" containerName="extract-content"
Feb 16 14:08:43 crc kubenswrapper[4812]: E0216 14:08:43.309014 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14accaf-a9bb-4863-8ac6-159dcbb40006" containerName="registry-server"
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.309023 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14accaf-a9bb-4863-8ac6-159dcbb40006" containerName="registry-server"
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.309416 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b14accaf-a9bb-4863-8ac6-159dcbb40006" containerName="registry-server"
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.312017 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fdbd7"
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.324692 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fdbd7"]
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.331238 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2acad8-cf0e-4e0e-baac-3c114869369b-catalog-content\") pod \"community-operators-fdbd7\" (UID: \"ca2acad8-cf0e-4e0e-baac-3c114869369b\") " pod="openshift-marketplace/community-operators-fdbd7"
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.331457 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2acad8-cf0e-4e0e-baac-3c114869369b-utilities\") pod \"community-operators-fdbd7\" (UID: \"ca2acad8-cf0e-4e0e-baac-3c114869369b\") " pod="openshift-marketplace/community-operators-fdbd7"
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.332044 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bgd6\" (UniqueName: \"kubernetes.io/projected/ca2acad8-cf0e-4e0e-baac-3c114869369b-kube-api-access-6bgd6\") pod \"community-operators-fdbd7\" (UID: \"ca2acad8-cf0e-4e0e-baac-3c114869369b\") " pod="openshift-marketplace/community-operators-fdbd7"
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.436029 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2acad8-cf0e-4e0e-baac-3c114869369b-catalog-content\") pod \"community-operators-fdbd7\" (UID: \"ca2acad8-cf0e-4e0e-baac-3c114869369b\") " pod="openshift-marketplace/community-operators-fdbd7"
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.436129 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2acad8-cf0e-4e0e-baac-3c114869369b-utilities\") pod \"community-operators-fdbd7\" (UID: \"ca2acad8-cf0e-4e0e-baac-3c114869369b\") " pod="openshift-marketplace/community-operators-fdbd7"
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.436248 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bgd6\" (UniqueName: \"kubernetes.io/projected/ca2acad8-cf0e-4e0e-baac-3c114869369b-kube-api-access-6bgd6\") pod \"community-operators-fdbd7\" (UID: \"ca2acad8-cf0e-4e0e-baac-3c114869369b\") " pod="openshift-marketplace/community-operators-fdbd7"
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.437159 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2acad8-cf0e-4e0e-baac-3c114869369b-catalog-content\") pod \"community-operators-fdbd7\" (UID: \"ca2acad8-cf0e-4e0e-baac-3c114869369b\") " pod="openshift-marketplace/community-operators-fdbd7"
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.438000 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2acad8-cf0e-4e0e-baac-3c114869369b-utilities\") pod \"community-operators-fdbd7\" (UID: \"ca2acad8-cf0e-4e0e-baac-3c114869369b\") " pod="openshift-marketplace/community-operators-fdbd7"
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.461654 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bgd6\" (UniqueName: \"kubernetes.io/projected/ca2acad8-cf0e-4e0e-baac-3c114869369b-kube-api-access-6bgd6\") pod \"community-operators-fdbd7\" (UID: \"ca2acad8-cf0e-4e0e-baac-3c114869369b\") " pod="openshift-marketplace/community-operators-fdbd7"
Feb 16 14:08:43 crc kubenswrapper[4812]: I0216 14:08:43.637831 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fdbd7"
Feb 16 14:08:44 crc kubenswrapper[4812]: I0216 14:08:44.237412 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fdbd7"]
Feb 16 14:08:44 crc kubenswrapper[4812]: I0216 14:08:44.317758 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdbd7" event={"ID":"ca2acad8-cf0e-4e0e-baac-3c114869369b","Type":"ContainerStarted","Data":"1307c6ba1284b63f806117a0cb3fe19d795ad0ab393d87356a599ffe824928aa"}
Feb 16 14:08:44 crc kubenswrapper[4812]: I0216 14:08:44.548691 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 14:08:44 crc kubenswrapper[4812]: I0216 14:08:44.549128 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 14:08:45 crc kubenswrapper[4812]: I0216 14:08:45.331156 4812 generic.go:334] "Generic (PLEG): container finished" podID="ca2acad8-cf0e-4e0e-baac-3c114869369b" containerID="660c1578bb7ecafac3be30c8de322869e5464919bad87f13086678a9a5418865" exitCode=0
Feb 16 14:08:45 crc kubenswrapper[4812]: I0216 14:08:45.331244 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdbd7" event={"ID":"ca2acad8-cf0e-4e0e-baac-3c114869369b","Type":"ContainerDied","Data":"660c1578bb7ecafac3be30c8de322869e5464919bad87f13086678a9a5418865"}
Feb 16 14:08:47 crc kubenswrapper[4812]: I0216 14:08:47.284071 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" podUID="3cdb1565-bb99-4e18-9089-7a2112685704" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 14:08:47 crc kubenswrapper[4812]: I0216 14:08:47.284433 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-778459db5b-d66gm" podUID="3cdb1565-bb99-4e18-9089-7a2112685704" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 14:08:47 crc kubenswrapper[4812]: I0216 14:08:47.513861 4812 generic.go:334] "Generic (PLEG): container finished" podID="ca2acad8-cf0e-4e0e-baac-3c114869369b" containerID="8f0ca4e08f10fe2d769134ab2683868a688f13f358263426d5d244fe925f4526" exitCode=0
Feb 16 14:08:47 crc kubenswrapper[4812]: I0216 14:08:47.513947 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdbd7" event={"ID":"ca2acad8-cf0e-4e0e-baac-3c114869369b","Type":"ContainerDied","Data":"8f0ca4e08f10fe2d769134ab2683868a688f13f358263426d5d244fe925f4526"}
Feb 16 14:08:48 crc kubenswrapper[4812]: I0216 14:08:48.526885 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdbd7" event={"ID":"ca2acad8-cf0e-4e0e-baac-3c114869369b","Type":"ContainerStarted","Data":"aeb5851ddc50316ed397863bb76317ba26ce39fda46a38e6bb8f4be121e175bd"}
Feb 16 14:08:48 crc kubenswrapper[4812]: I0216 14:08:48.553011 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fdbd7" podStartSLOduration=2.981962325 podStartE2EDuration="5.552936618s" podCreationTimestamp="2026-02-16 14:08:43 +0000 UTC" firstStartedPulling="2026-02-16 14:08:45.336225398 +0000 UTC m=+2214.400556099" lastFinishedPulling="2026-02-16 14:08:47.907199691 +0000 UTC m=+2216.971530392" observedRunningTime="2026-02-16 14:08:48.550286403 +0000 UTC m=+2217.614617104" watchObservedRunningTime="2026-02-16 14:08:48.552936618 +0000 UTC m=+2217.617267309"
Feb 16 14:08:53 crc kubenswrapper[4812]: I0216 14:08:53.638567 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fdbd7"
Feb 16 14:08:53 crc kubenswrapper[4812]: I0216 14:08:53.639254 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fdbd7"
Feb 16 14:08:53 crc kubenswrapper[4812]: I0216 14:08:53.690680 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fdbd7"
Feb 16 14:08:53 crc kubenswrapper[4812]: E0216 14:08:53.882044 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735"
Feb 16 14:08:54 crc kubenswrapper[4812]: I0216 14:08:54.645990 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fdbd7"
Feb 16 14:08:54 crc kubenswrapper[4812]: I0216 14:08:54.699545 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fdbd7"]
Feb 16 14:08:56 crc kubenswrapper[4812]: I0216 14:08:56.615892 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fdbd7" podUID="ca2acad8-cf0e-4e0e-baac-3c114869369b" containerName="registry-server" containerID="cri-o://aeb5851ddc50316ed397863bb76317ba26ce39fda46a38e6bb8f4be121e175bd" gracePeriod=2
Feb 16 14:08:57 crc kubenswrapper[4812]: I0216 14:08:57.168976 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fdbd7"
Feb 16 14:08:57 crc kubenswrapper[4812]: I0216 14:08:57.208561 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bgd6\" (UniqueName: \"kubernetes.io/projected/ca2acad8-cf0e-4e0e-baac-3c114869369b-kube-api-access-6bgd6\") pod \"ca2acad8-cf0e-4e0e-baac-3c114869369b\" (UID: \"ca2acad8-cf0e-4e0e-baac-3c114869369b\") "
Feb 16 14:08:57 crc kubenswrapper[4812]: I0216 14:08:57.208678 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2acad8-cf0e-4e0e-baac-3c114869369b-utilities\") pod \"ca2acad8-cf0e-4e0e-baac-3c114869369b\" (UID: \"ca2acad8-cf0e-4e0e-baac-3c114869369b\") "
Feb 16 14:08:57 crc kubenswrapper[4812]: I0216 14:08:57.208786 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2acad8-cf0e-4e0e-baac-3c114869369b-catalog-content\") pod \"ca2acad8-cf0e-4e0e-baac-3c114869369b\" (UID: \"ca2acad8-cf0e-4e0e-baac-3c114869369b\") "
Feb 16 14:08:57 crc kubenswrapper[4812]: I0216 14:08:57.209701 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca2acad8-cf0e-4e0e-baac-3c114869369b-utilities" (OuterVolumeSpecName: "utilities") pod "ca2acad8-cf0e-4e0e-baac-3c114869369b" (UID: "ca2acad8-cf0e-4e0e-baac-3c114869369b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:08:57 crc kubenswrapper[4812]: I0216 14:08:57.312781 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2acad8-cf0e-4e0e-baac-3c114869369b-kube-api-access-6bgd6" (OuterVolumeSpecName: "kube-api-access-6bgd6") pod "ca2acad8-cf0e-4e0e-baac-3c114869369b" (UID: "ca2acad8-cf0e-4e0e-baac-3c114869369b"). InnerVolumeSpecName "kube-api-access-6bgd6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:08:57 crc kubenswrapper[4812]: I0216 14:08:57.314533 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bgd6\" (UniqueName: \"kubernetes.io/projected/ca2acad8-cf0e-4e0e-baac-3c114869369b-kube-api-access-6bgd6\") on node \"crc\" DevicePath \"\""
Feb 16 14:08:57 crc kubenswrapper[4812]: I0216 14:08:57.314614 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2acad8-cf0e-4e0e-baac-3c114869369b-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 14:08:58 crc kubenswrapper[4812]: I0216 14:08:58.046605 4812 generic.go:334] "Generic (PLEG): container finished" podID="ca2acad8-cf0e-4e0e-baac-3c114869369b" containerID="aeb5851ddc50316ed397863bb76317ba26ce39fda46a38e6bb8f4be121e175bd" exitCode=0
Feb 16 14:08:58 crc kubenswrapper[4812]: I0216 14:08:58.048138 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdbd7" event={"ID":"ca2acad8-cf0e-4e0e-baac-3c114869369b","Type":"ContainerDied","Data":"aeb5851ddc50316ed397863bb76317ba26ce39fda46a38e6bb8f4be121e175bd"}
Feb 16 14:08:58 crc kubenswrapper[4812]: I0216 14:08:58.048253 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fdbd7" event={"ID":"ca2acad8-cf0e-4e0e-baac-3c114869369b","Type":"ContainerDied","Data":"1307c6ba1284b63f806117a0cb3fe19d795ad0ab393d87356a599ffe824928aa"}
Feb 16 14:08:58 crc kubenswrapper[4812]:
I0216 14:08:58.048343 4812 scope.go:117] "RemoveContainer" containerID="aeb5851ddc50316ed397863bb76317ba26ce39fda46a38e6bb8f4be121e175bd" Feb 16 14:08:58 crc kubenswrapper[4812]: I0216 14:08:58.048645 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fdbd7" Feb 16 14:08:58 crc kubenswrapper[4812]: I0216 14:08:58.092427 4812 scope.go:117] "RemoveContainer" containerID="8f0ca4e08f10fe2d769134ab2683868a688f13f358263426d5d244fe925f4526" Feb 16 14:08:58 crc kubenswrapper[4812]: I0216 14:08:58.106584 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca2acad8-cf0e-4e0e-baac-3c114869369b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca2acad8-cf0e-4e0e-baac-3c114869369b" (UID: "ca2acad8-cf0e-4e0e-baac-3c114869369b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:08:58 crc kubenswrapper[4812]: I0216 14:08:58.120950 4812 scope.go:117] "RemoveContainer" containerID="660c1578bb7ecafac3be30c8de322869e5464919bad87f13086678a9a5418865" Feb 16 14:08:58 crc kubenswrapper[4812]: I0216 14:08:58.186933 4812 scope.go:117] "RemoveContainer" containerID="aeb5851ddc50316ed397863bb76317ba26ce39fda46a38e6bb8f4be121e175bd" Feb 16 14:08:58 crc kubenswrapper[4812]: E0216 14:08:58.187797 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeb5851ddc50316ed397863bb76317ba26ce39fda46a38e6bb8f4be121e175bd\": container with ID starting with aeb5851ddc50316ed397863bb76317ba26ce39fda46a38e6bb8f4be121e175bd not found: ID does not exist" containerID="aeb5851ddc50316ed397863bb76317ba26ce39fda46a38e6bb8f4be121e175bd" Feb 16 14:08:58 crc kubenswrapper[4812]: I0216 14:08:58.187873 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aeb5851ddc50316ed397863bb76317ba26ce39fda46a38e6bb8f4be121e175bd"} err="failed to get container status \"aeb5851ddc50316ed397863bb76317ba26ce39fda46a38e6bb8f4be121e175bd\": rpc error: code = NotFound desc = could not find container \"aeb5851ddc50316ed397863bb76317ba26ce39fda46a38e6bb8f4be121e175bd\": container with ID starting with aeb5851ddc50316ed397863bb76317ba26ce39fda46a38e6bb8f4be121e175bd not found: ID does not exist" Feb 16 14:08:58 crc kubenswrapper[4812]: I0216 14:08:58.187921 4812 scope.go:117] "RemoveContainer" containerID="8f0ca4e08f10fe2d769134ab2683868a688f13f358263426d5d244fe925f4526" Feb 16 14:08:58 crc kubenswrapper[4812]: E0216 14:08:58.188634 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f0ca4e08f10fe2d769134ab2683868a688f13f358263426d5d244fe925f4526\": container with ID starting with 8f0ca4e08f10fe2d769134ab2683868a688f13f358263426d5d244fe925f4526 not found: ID does not exist" containerID="8f0ca4e08f10fe2d769134ab2683868a688f13f358263426d5d244fe925f4526" Feb 16 14:08:58 crc kubenswrapper[4812]: I0216 14:08:58.188704 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f0ca4e08f10fe2d769134ab2683868a688f13f358263426d5d244fe925f4526"} err="failed to get container status \"8f0ca4e08f10fe2d769134ab2683868a688f13f358263426d5d244fe925f4526\": rpc error: code = NotFound desc = could not find container \"8f0ca4e08f10fe2d769134ab2683868a688f13f358263426d5d244fe925f4526\": container with ID starting with 8f0ca4e08f10fe2d769134ab2683868a688f13f358263426d5d244fe925f4526 not found: ID does not exist" Feb 16 14:08:58 crc kubenswrapper[4812]: I0216 14:08:58.188747 4812 scope.go:117] "RemoveContainer" containerID="660c1578bb7ecafac3be30c8de322869e5464919bad87f13086678a9a5418865" Feb 16 14:08:58 crc kubenswrapper[4812]: E0216 14:08:58.189106 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"660c1578bb7ecafac3be30c8de322869e5464919bad87f13086678a9a5418865\": container with ID starting with 660c1578bb7ecafac3be30c8de322869e5464919bad87f13086678a9a5418865 not found: ID does not exist" containerID="660c1578bb7ecafac3be30c8de322869e5464919bad87f13086678a9a5418865" Feb 16 14:08:58 crc kubenswrapper[4812]: I0216 14:08:58.189136 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"660c1578bb7ecafac3be30c8de322869e5464919bad87f13086678a9a5418865"} err="failed to get container status \"660c1578bb7ecafac3be30c8de322869e5464919bad87f13086678a9a5418865\": rpc error: code = NotFound desc = could not find container \"660c1578bb7ecafac3be30c8de322869e5464919bad87f13086678a9a5418865\": container with ID starting with 660c1578bb7ecafac3be30c8de322869e5464919bad87f13086678a9a5418865 not found: ID does not exist" Feb 16 14:08:58 crc kubenswrapper[4812]: I0216 14:08:58.203098 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2acad8-cf0e-4e0e-baac-3c114869369b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:08:58 crc kubenswrapper[4812]: I0216 14:08:58.394051 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fdbd7"] Feb 16 14:08:58 crc kubenswrapper[4812]: I0216 14:08:58.402683 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fdbd7"] Feb 16 14:08:59 crc kubenswrapper[4812]: I0216 14:08:59.895362 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2acad8-cf0e-4e0e-baac-3c114869369b" path="/var/lib/kubelet/pods/ca2acad8-cf0e-4e0e-baac-3c114869369b/volumes" Feb 16 14:09:07 crc kubenswrapper[4812]: E0216 14:09:07.882016 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:09:08 crc kubenswrapper[4812]: I0216 14:09:08.840631 4812 scope.go:117] "RemoveContainer" containerID="4af6b842f6da140a89f3af5019348860b68855e3bb018ba6d2d7b598b72ca632" Feb 16 14:09:14 crc kubenswrapper[4812]: I0216 14:09:14.548966 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:09:14 crc kubenswrapper[4812]: I0216 14:09:14.549932 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:09:18 crc kubenswrapper[4812]: E0216 14:09:18.885635 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:09:33 crc kubenswrapper[4812]: E0216 14:09:33.882314 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:09:44 crc kubenswrapper[4812]: I0216 
14:09:44.553068 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:09:44 crc kubenswrapper[4812]: I0216 14:09:44.554150 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:09:44 crc kubenswrapper[4812]: I0216 14:09:44.554249 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 14:09:44 crc kubenswrapper[4812]: I0216 14:09:44.558669 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70"} pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 14:09:44 crc kubenswrapper[4812]: I0216 14:09:44.558808 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" containerID="cri-o://f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" gracePeriod=600 Feb 16 14:09:44 crc kubenswrapper[4812]: E0216 14:09:44.690756 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:09:44 crc kubenswrapper[4812]: I0216 14:09:44.831343 4812 generic.go:334] "Generic (PLEG): container finished" podID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" exitCode=0 Feb 16 14:09:44 crc kubenswrapper[4812]: I0216 14:09:44.831408 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerDied","Data":"f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70"} Feb 16 14:09:44 crc kubenswrapper[4812]: I0216 14:09:44.831474 4812 scope.go:117] "RemoveContainer" containerID="2537a5668451bbc3263438cdeabe941020140f9d71754aa3ed0e0ff1820e5ccc" Feb 16 14:09:44 crc kubenswrapper[4812]: I0216 14:09:44.833927 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:09:44 crc kubenswrapper[4812]: E0216 14:09:44.834704 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:09:46 crc kubenswrapper[4812]: E0216 14:09:46.883483 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:09:55 crc kubenswrapper[4812]: I0216 14:09:55.880016 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:09:55 crc kubenswrapper[4812]: E0216 14:09:55.881006 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:10:01 crc kubenswrapper[4812]: E0216 14:10:01.888840 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:10:06 crc kubenswrapper[4812]: I0216 14:10:06.352500 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v48nm"] Feb 16 14:10:06 crc kubenswrapper[4812]: E0216 14:10:06.353517 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2acad8-cf0e-4e0e-baac-3c114869369b" containerName="extract-utilities" Feb 16 14:10:06 crc kubenswrapper[4812]: I0216 14:10:06.353543 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2acad8-cf0e-4e0e-baac-3c114869369b" containerName="extract-utilities" Feb 16 14:10:06 crc kubenswrapper[4812]: E0216 14:10:06.353578 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2acad8-cf0e-4e0e-baac-3c114869369b" 
containerName="extract-content" Feb 16 14:10:06 crc kubenswrapper[4812]: I0216 14:10:06.353585 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2acad8-cf0e-4e0e-baac-3c114869369b" containerName="extract-content" Feb 16 14:10:06 crc kubenswrapper[4812]: E0216 14:10:06.353600 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2acad8-cf0e-4e0e-baac-3c114869369b" containerName="registry-server" Feb 16 14:10:06 crc kubenswrapper[4812]: I0216 14:10:06.353606 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2acad8-cf0e-4e0e-baac-3c114869369b" containerName="registry-server" Feb 16 14:10:06 crc kubenswrapper[4812]: I0216 14:10:06.353885 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2acad8-cf0e-4e0e-baac-3c114869369b" containerName="registry-server" Feb 16 14:10:06 crc kubenswrapper[4812]: I0216 14:10:06.355685 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 14:10:06 crc kubenswrapper[4812]: I0216 14:10:06.369163 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v48nm"] Feb 16 14:10:06 crc kubenswrapper[4812]: I0216 14:10:06.526606 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c64a4a8-7489-4d68-8601-96092bb0e72f-catalog-content\") pod \"redhat-operators-v48nm\" (UID: \"5c64a4a8-7489-4d68-8601-96092bb0e72f\") " pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 14:10:06 crc kubenswrapper[4812]: I0216 14:10:06.526958 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c64a4a8-7489-4d68-8601-96092bb0e72f-utilities\") pod \"redhat-operators-v48nm\" (UID: \"5c64a4a8-7489-4d68-8601-96092bb0e72f\") " pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 
14:10:06 crc kubenswrapper[4812]: I0216 14:10:06.527070 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9p6c\" (UniqueName: \"kubernetes.io/projected/5c64a4a8-7489-4d68-8601-96092bb0e72f-kube-api-access-k9p6c\") pod \"redhat-operators-v48nm\" (UID: \"5c64a4a8-7489-4d68-8601-96092bb0e72f\") " pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 14:10:06 crc kubenswrapper[4812]: I0216 14:10:06.628717 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c64a4a8-7489-4d68-8601-96092bb0e72f-catalog-content\") pod \"redhat-operators-v48nm\" (UID: \"5c64a4a8-7489-4d68-8601-96092bb0e72f\") " pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 14:10:06 crc kubenswrapper[4812]: I0216 14:10:06.628794 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c64a4a8-7489-4d68-8601-96092bb0e72f-utilities\") pod \"redhat-operators-v48nm\" (UID: \"5c64a4a8-7489-4d68-8601-96092bb0e72f\") " pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 14:10:06 crc kubenswrapper[4812]: I0216 14:10:06.628825 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9p6c\" (UniqueName: \"kubernetes.io/projected/5c64a4a8-7489-4d68-8601-96092bb0e72f-kube-api-access-k9p6c\") pod \"redhat-operators-v48nm\" (UID: \"5c64a4a8-7489-4d68-8601-96092bb0e72f\") " pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 14:10:06 crc kubenswrapper[4812]: I0216 14:10:06.629572 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c64a4a8-7489-4d68-8601-96092bb0e72f-utilities\") pod \"redhat-operators-v48nm\" (UID: \"5c64a4a8-7489-4d68-8601-96092bb0e72f\") " pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 14:10:06 crc kubenswrapper[4812]: 
I0216 14:10:06.629659 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c64a4a8-7489-4d68-8601-96092bb0e72f-catalog-content\") pod \"redhat-operators-v48nm\" (UID: \"5c64a4a8-7489-4d68-8601-96092bb0e72f\") " pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 14:10:06 crc kubenswrapper[4812]: I0216 14:10:06.655687 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9p6c\" (UniqueName: \"kubernetes.io/projected/5c64a4a8-7489-4d68-8601-96092bb0e72f-kube-api-access-k9p6c\") pod \"redhat-operators-v48nm\" (UID: \"5c64a4a8-7489-4d68-8601-96092bb0e72f\") " pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 14:10:06 crc kubenswrapper[4812]: I0216 14:10:06.718728 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 14:10:07 crc kubenswrapper[4812]: I0216 14:10:07.272751 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v48nm"] Feb 16 14:10:07 crc kubenswrapper[4812]: I0216 14:10:07.452542 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v48nm" event={"ID":"5c64a4a8-7489-4d68-8601-96092bb0e72f","Type":"ContainerStarted","Data":"12eba7f16b1cbbd7453b32019a08d1af5b61c8ca0e9d2104d3d9561e7478dfca"} Feb 16 14:10:08 crc kubenswrapper[4812]: I0216 14:10:08.481758 4812 generic.go:334] "Generic (PLEG): container finished" podID="5c64a4a8-7489-4d68-8601-96092bb0e72f" containerID="074a9502998dd3d7a0473014bc092e49d6b4efa7b2c3ee800787e712cc610475" exitCode=0 Feb 16 14:10:08 crc kubenswrapper[4812]: I0216 14:10:08.481886 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v48nm" event={"ID":"5c64a4a8-7489-4d68-8601-96092bb0e72f","Type":"ContainerDied","Data":"074a9502998dd3d7a0473014bc092e49d6b4efa7b2c3ee800787e712cc610475"} Feb 16 
14:10:08 crc kubenswrapper[4812]: I0216 14:10:08.484349 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 14:10:10 crc kubenswrapper[4812]: I0216 14:10:10.506802 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v48nm" event={"ID":"5c64a4a8-7489-4d68-8601-96092bb0e72f","Type":"ContainerStarted","Data":"42fb8ef570d2f6c6cf450550ba4667fe93f9e409816bb66118b59722f35bf66a"} Feb 16 14:10:10 crc kubenswrapper[4812]: I0216 14:10:10.881435 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:10:10 crc kubenswrapper[4812]: E0216 14:10:10.881843 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:10:12 crc kubenswrapper[4812]: I0216 14:10:12.538083 4812 generic.go:334] "Generic (PLEG): container finished" podID="5c64a4a8-7489-4d68-8601-96092bb0e72f" containerID="42fb8ef570d2f6c6cf450550ba4667fe93f9e409816bb66118b59722f35bf66a" exitCode=0 Feb 16 14:10:12 crc kubenswrapper[4812]: I0216 14:10:12.538186 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v48nm" event={"ID":"5c64a4a8-7489-4d68-8601-96092bb0e72f","Type":"ContainerDied","Data":"42fb8ef570d2f6c6cf450550ba4667fe93f9e409816bb66118b59722f35bf66a"} Feb 16 14:10:13 crc kubenswrapper[4812]: I0216 14:10:13.553710 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v48nm" 
event={"ID":"5c64a4a8-7489-4d68-8601-96092bb0e72f","Type":"ContainerStarted","Data":"84b11e73be9e42a178504ecd97cd3d5f1751f0f8680eaa7ee796df9d73940042"} Feb 16 14:10:13 crc kubenswrapper[4812]: I0216 14:10:13.583860 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v48nm" podStartSLOduration=3.108350904 podStartE2EDuration="7.58378308s" podCreationTimestamp="2026-02-16 14:10:06 +0000 UTC" firstStartedPulling="2026-02-16 14:10:08.483902013 +0000 UTC m=+2297.548232714" lastFinishedPulling="2026-02-16 14:10:12.959334189 +0000 UTC m=+2302.023664890" observedRunningTime="2026-02-16 14:10:13.576150342 +0000 UTC m=+2302.640481063" watchObservedRunningTime="2026-02-16 14:10:13.58378308 +0000 UTC m=+2302.648113781" Feb 16 14:10:16 crc kubenswrapper[4812]: I0216 14:10:16.719027 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 14:10:16 crc kubenswrapper[4812]: I0216 14:10:16.719614 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 14:10:17 crc kubenswrapper[4812]: E0216 14:10:17.172493 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 14:10:17 crc kubenswrapper[4812]: E0216 14:10:17.172699 4812 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 14:10:17 crc kubenswrapper[4812]: E0216 14:10:17.173044 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/
var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq54s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-krnzs_openstack(a7d4eae6-781f-4675-a6c3-ee0f1589c735): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 14:10:17 crc kubenswrapper[4812]: E0216 14:10:17.174707 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:10:17 crc kubenswrapper[4812]: I0216 14:10:17.777676 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v48nm" podUID="5c64a4a8-7489-4d68-8601-96092bb0e72f" containerName="registry-server" probeResult="failure" output=< Feb 16 14:10:17 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 16 14:10:17 crc kubenswrapper[4812]: > Feb 16 14:10:25 crc kubenswrapper[4812]: I0216 14:10:25.879746 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:10:25 crc kubenswrapper[4812]: E0216 14:10:25.882282 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:10:27 crc kubenswrapper[4812]: I0216 14:10:27.771052 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v48nm" podUID="5c64a4a8-7489-4d68-8601-96092bb0e72f" containerName="registry-server" probeResult="failure" output=< Feb 16 14:10:27 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 16 14:10:27 crc kubenswrapper[4812]: > Feb 16 14:10:29 crc kubenswrapper[4812]: E0216 14:10:29.883760 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" 
podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:10:36 crc kubenswrapper[4812]: I0216 14:10:36.788834 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 14:10:36 crc kubenswrapper[4812]: I0216 14:10:36.854806 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 14:10:37 crc kubenswrapper[4812]: I0216 14:10:37.548757 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v48nm"] Feb 16 14:10:38 crc kubenswrapper[4812]: I0216 14:10:38.814215 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v48nm" podUID="5c64a4a8-7489-4d68-8601-96092bb0e72f" containerName="registry-server" containerID="cri-o://84b11e73be9e42a178504ecd97cd3d5f1751f0f8680eaa7ee796df9d73940042" gracePeriod=2 Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.329250 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.417778 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9p6c\" (UniqueName: \"kubernetes.io/projected/5c64a4a8-7489-4d68-8601-96092bb0e72f-kube-api-access-k9p6c\") pod \"5c64a4a8-7489-4d68-8601-96092bb0e72f\" (UID: \"5c64a4a8-7489-4d68-8601-96092bb0e72f\") " Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.417904 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c64a4a8-7489-4d68-8601-96092bb0e72f-utilities\") pod \"5c64a4a8-7489-4d68-8601-96092bb0e72f\" (UID: \"5c64a4a8-7489-4d68-8601-96092bb0e72f\") " Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.417984 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c64a4a8-7489-4d68-8601-96092bb0e72f-catalog-content\") pod \"5c64a4a8-7489-4d68-8601-96092bb0e72f\" (UID: \"5c64a4a8-7489-4d68-8601-96092bb0e72f\") " Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.420670 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c64a4a8-7489-4d68-8601-96092bb0e72f-utilities" (OuterVolumeSpecName: "utilities") pod "5c64a4a8-7489-4d68-8601-96092bb0e72f" (UID: "5c64a4a8-7489-4d68-8601-96092bb0e72f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.428806 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c64a4a8-7489-4d68-8601-96092bb0e72f-kube-api-access-k9p6c" (OuterVolumeSpecName: "kube-api-access-k9p6c") pod "5c64a4a8-7489-4d68-8601-96092bb0e72f" (UID: "5c64a4a8-7489-4d68-8601-96092bb0e72f"). InnerVolumeSpecName "kube-api-access-k9p6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.522371 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9p6c\" (UniqueName: \"kubernetes.io/projected/5c64a4a8-7489-4d68-8601-96092bb0e72f-kube-api-access-k9p6c\") on node \"crc\" DevicePath \"\"" Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.522420 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c64a4a8-7489-4d68-8601-96092bb0e72f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.577555 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c64a4a8-7489-4d68-8601-96092bb0e72f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c64a4a8-7489-4d68-8601-96092bb0e72f" (UID: "5c64a4a8-7489-4d68-8601-96092bb0e72f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.624747 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c64a4a8-7489-4d68-8601-96092bb0e72f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.829108 4812 generic.go:334] "Generic (PLEG): container finished" podID="5c64a4a8-7489-4d68-8601-96092bb0e72f" containerID="84b11e73be9e42a178504ecd97cd3d5f1751f0f8680eaa7ee796df9d73940042" exitCode=0 Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.829164 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v48nm" event={"ID":"5c64a4a8-7489-4d68-8601-96092bb0e72f","Type":"ContainerDied","Data":"84b11e73be9e42a178504ecd97cd3d5f1751f0f8680eaa7ee796df9d73940042"} Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.829200 4812 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-v48nm" event={"ID":"5c64a4a8-7489-4d68-8601-96092bb0e72f","Type":"ContainerDied","Data":"12eba7f16b1cbbd7453b32019a08d1af5b61c8ca0e9d2104d3d9561e7478dfca"} Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.829224 4812 scope.go:117] "RemoveContainer" containerID="84b11e73be9e42a178504ecd97cd3d5f1751f0f8680eaa7ee796df9d73940042" Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.829376 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v48nm" Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.857149 4812 scope.go:117] "RemoveContainer" containerID="42fb8ef570d2f6c6cf450550ba4667fe93f9e409816bb66118b59722f35bf66a" Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.899695 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v48nm"] Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.899769 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v48nm"] Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.905877 4812 scope.go:117] "RemoveContainer" containerID="074a9502998dd3d7a0473014bc092e49d6b4efa7b2c3ee800787e712cc610475" Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.959033 4812 scope.go:117] "RemoveContainer" containerID="84b11e73be9e42a178504ecd97cd3d5f1751f0f8680eaa7ee796df9d73940042" Feb 16 14:10:39 crc kubenswrapper[4812]: E0216 14:10:39.959795 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b11e73be9e42a178504ecd97cd3d5f1751f0f8680eaa7ee796df9d73940042\": container with ID starting with 84b11e73be9e42a178504ecd97cd3d5f1751f0f8680eaa7ee796df9d73940042 not found: ID does not exist" containerID="84b11e73be9e42a178504ecd97cd3d5f1751f0f8680eaa7ee796df9d73940042" Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.959854 4812 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b11e73be9e42a178504ecd97cd3d5f1751f0f8680eaa7ee796df9d73940042"} err="failed to get container status \"84b11e73be9e42a178504ecd97cd3d5f1751f0f8680eaa7ee796df9d73940042\": rpc error: code = NotFound desc = could not find container \"84b11e73be9e42a178504ecd97cd3d5f1751f0f8680eaa7ee796df9d73940042\": container with ID starting with 84b11e73be9e42a178504ecd97cd3d5f1751f0f8680eaa7ee796df9d73940042 not found: ID does not exist" Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.959887 4812 scope.go:117] "RemoveContainer" containerID="42fb8ef570d2f6c6cf450550ba4667fe93f9e409816bb66118b59722f35bf66a" Feb 16 14:10:39 crc kubenswrapper[4812]: E0216 14:10:39.961275 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42fb8ef570d2f6c6cf450550ba4667fe93f9e409816bb66118b59722f35bf66a\": container with ID starting with 42fb8ef570d2f6c6cf450550ba4667fe93f9e409816bb66118b59722f35bf66a not found: ID does not exist" containerID="42fb8ef570d2f6c6cf450550ba4667fe93f9e409816bb66118b59722f35bf66a" Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.961325 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42fb8ef570d2f6c6cf450550ba4667fe93f9e409816bb66118b59722f35bf66a"} err="failed to get container status \"42fb8ef570d2f6c6cf450550ba4667fe93f9e409816bb66118b59722f35bf66a\": rpc error: code = NotFound desc = could not find container \"42fb8ef570d2f6c6cf450550ba4667fe93f9e409816bb66118b59722f35bf66a\": container with ID starting with 42fb8ef570d2f6c6cf450550ba4667fe93f9e409816bb66118b59722f35bf66a not found: ID does not exist" Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.961360 4812 scope.go:117] "RemoveContainer" containerID="074a9502998dd3d7a0473014bc092e49d6b4efa7b2c3ee800787e712cc610475" Feb 16 14:10:39 crc kubenswrapper[4812]: E0216 
14:10:39.962105 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"074a9502998dd3d7a0473014bc092e49d6b4efa7b2c3ee800787e712cc610475\": container with ID starting with 074a9502998dd3d7a0473014bc092e49d6b4efa7b2c3ee800787e712cc610475 not found: ID does not exist" containerID="074a9502998dd3d7a0473014bc092e49d6b4efa7b2c3ee800787e712cc610475" Feb 16 14:10:39 crc kubenswrapper[4812]: I0216 14:10:39.962133 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"074a9502998dd3d7a0473014bc092e49d6b4efa7b2c3ee800787e712cc610475"} err="failed to get container status \"074a9502998dd3d7a0473014bc092e49d6b4efa7b2c3ee800787e712cc610475\": rpc error: code = NotFound desc = could not find container \"074a9502998dd3d7a0473014bc092e49d6b4efa7b2c3ee800787e712cc610475\": container with ID starting with 074a9502998dd3d7a0473014bc092e49d6b4efa7b2c3ee800787e712cc610475 not found: ID does not exist" Feb 16 14:10:40 crc kubenswrapper[4812]: I0216 14:10:40.880305 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:10:40 crc kubenswrapper[4812]: E0216 14:10:40.881028 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:10:41 crc kubenswrapper[4812]: I0216 14:10:41.892291 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c64a4a8-7489-4d68-8601-96092bb0e72f" path="/var/lib/kubelet/pods/5c64a4a8-7489-4d68-8601-96092bb0e72f/volumes" Feb 16 14:10:42 crc kubenswrapper[4812]: E0216 14:10:42.882430 
4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:10:52 crc kubenswrapper[4812]: I0216 14:10:52.881098 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:10:52 crc kubenswrapper[4812]: E0216 14:10:52.882235 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:10:57 crc kubenswrapper[4812]: E0216 14:10:57.883278 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:11:03 crc kubenswrapper[4812]: I0216 14:11:03.880156 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:11:03 crc kubenswrapper[4812]: E0216 14:11:03.881245 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:11:10 crc kubenswrapper[4812]: E0216 14:11:10.882513 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:11:17 crc kubenswrapper[4812]: I0216 14:11:17.880085 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:11:17 crc kubenswrapper[4812]: E0216 14:11:17.881196 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:11:21 crc kubenswrapper[4812]: E0216 14:11:21.890089 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:11:30 crc kubenswrapper[4812]: I0216 14:11:30.880522 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:11:30 crc kubenswrapper[4812]: E0216 14:11:30.881667 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:11:34 crc kubenswrapper[4812]: E0216 14:11:34.882893 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:11:45 crc kubenswrapper[4812]: I0216 14:11:45.880691 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:11:45 crc kubenswrapper[4812]: E0216 14:11:45.882211 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:11:45 crc kubenswrapper[4812]: E0216 14:11:45.884158 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:11:58 crc kubenswrapper[4812]: I0216 14:11:58.879416 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:11:58 crc kubenswrapper[4812]: E0216 14:11:58.880609 4812 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:11:59 crc kubenswrapper[4812]: E0216 14:11:59.883370 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:12:13 crc kubenswrapper[4812]: I0216 14:12:13.879725 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:12:13 crc kubenswrapper[4812]: E0216 14:12:13.880991 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:12:14 crc kubenswrapper[4812]: E0216 14:12:14.881425 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:12:24 crc kubenswrapper[4812]: I0216 14:12:24.880306 4812 scope.go:117] 
"RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:12:24 crc kubenswrapper[4812]: E0216 14:12:24.881475 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:12:26 crc kubenswrapper[4812]: E0216 14:12:26.882903 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:12:36 crc kubenswrapper[4812]: I0216 14:12:36.879357 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:12:36 crc kubenswrapper[4812]: E0216 14:12:36.880594 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:12:39 crc kubenswrapper[4812]: E0216 14:12:39.882923 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:12:51 crc kubenswrapper[4812]: I0216 14:12:51.887485 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:12:51 crc kubenswrapper[4812]: E0216 14:12:51.889323 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:12:54 crc kubenswrapper[4812]: E0216 14:12:54.882555 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:13:03 crc kubenswrapper[4812]: I0216 14:13:03.880095 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:13:03 crc kubenswrapper[4812]: E0216 14:13:03.881703 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:13:09 crc kubenswrapper[4812]: E0216 14:13:09.883662 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.540169 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mvq4h"] Feb 16 14:13:15 crc kubenswrapper[4812]: E0216 14:13:15.543173 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c64a4a8-7489-4d68-8601-96092bb0e72f" containerName="registry-server" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.543339 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c64a4a8-7489-4d68-8601-96092bb0e72f" containerName="registry-server" Feb 16 14:13:15 crc kubenswrapper[4812]: E0216 14:13:15.543419 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c64a4a8-7489-4d68-8601-96092bb0e72f" containerName="extract-utilities" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.543496 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c64a4a8-7489-4d68-8601-96092bb0e72f" containerName="extract-utilities" Feb 16 14:13:15 crc kubenswrapper[4812]: E0216 14:13:15.543585 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c64a4a8-7489-4d68-8601-96092bb0e72f" containerName="extract-content" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.543640 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c64a4a8-7489-4d68-8601-96092bb0e72f" containerName="extract-content" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.543881 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c64a4a8-7489-4d68-8601-96092bb0e72f" containerName="registry-server" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.547793 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.561922 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mvq4h"] Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.746524 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx4m9\" (UniqueName: \"kubernetes.io/projected/ebd6a20f-bc36-4691-abb5-f0f503274525-kube-api-access-tx4m9\") pod \"certified-operators-mvq4h\" (UID: \"ebd6a20f-bc36-4691-abb5-f0f503274525\") " pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.747138 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebd6a20f-bc36-4691-abb5-f0f503274525-utilities\") pod \"certified-operators-mvq4h\" (UID: \"ebd6a20f-bc36-4691-abb5-f0f503274525\") " pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.747243 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebd6a20f-bc36-4691-abb5-f0f503274525-catalog-content\") pod \"certified-operators-mvq4h\" (UID: \"ebd6a20f-bc36-4691-abb5-f0f503274525\") " pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.850584 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebd6a20f-bc36-4691-abb5-f0f503274525-catalog-content\") pod \"certified-operators-mvq4h\" (UID: \"ebd6a20f-bc36-4691-abb5-f0f503274525\") " pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.851101 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tx4m9\" (UniqueName: \"kubernetes.io/projected/ebd6a20f-bc36-4691-abb5-f0f503274525-kube-api-access-tx4m9\") pod \"certified-operators-mvq4h\" (UID: \"ebd6a20f-bc36-4691-abb5-f0f503274525\") " pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.851303 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebd6a20f-bc36-4691-abb5-f0f503274525-utilities\") pod \"certified-operators-mvq4h\" (UID: \"ebd6a20f-bc36-4691-abb5-f0f503274525\") " pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.851366 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebd6a20f-bc36-4691-abb5-f0f503274525-catalog-content\") pod \"certified-operators-mvq4h\" (UID: \"ebd6a20f-bc36-4691-abb5-f0f503274525\") " pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.853135 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebd6a20f-bc36-4691-abb5-f0f503274525-utilities\") pod \"certified-operators-mvq4h\" (UID: \"ebd6a20f-bc36-4691-abb5-f0f503274525\") " pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.880586 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:13:15 crc kubenswrapper[4812]: E0216 14:13:15.881169 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.881953 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx4m9\" (UniqueName: \"kubernetes.io/projected/ebd6a20f-bc36-4691-abb5-f0f503274525-kube-api-access-tx4m9\") pod \"certified-operators-mvq4h\" (UID: \"ebd6a20f-bc36-4691-abb5-f0f503274525\") " pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:15 crc kubenswrapper[4812]: I0216 14:13:15.899166 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:16 crc kubenswrapper[4812]: I0216 14:13:16.365306 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mvq4h"] Feb 16 14:13:17 crc kubenswrapper[4812]: I0216 14:13:17.065058 4812 generic.go:334] "Generic (PLEG): container finished" podID="ebd6a20f-bc36-4691-abb5-f0f503274525" containerID="f0ea591a0bed0cb72e65c4f038f4451bf4271d383822bff3dd1b4a0a561d0150" exitCode=0 Feb 16 14:13:17 crc kubenswrapper[4812]: I0216 14:13:17.065167 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvq4h" event={"ID":"ebd6a20f-bc36-4691-abb5-f0f503274525","Type":"ContainerDied","Data":"f0ea591a0bed0cb72e65c4f038f4451bf4271d383822bff3dd1b4a0a561d0150"} Feb 16 14:13:17 crc kubenswrapper[4812]: I0216 14:13:17.065485 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvq4h" event={"ID":"ebd6a20f-bc36-4691-abb5-f0f503274525","Type":"ContainerStarted","Data":"e731939b972aaad76e7f4c5061c552cb99df5e9e8a6043dcb14ee9156828b58f"} Feb 16 14:13:19 crc kubenswrapper[4812]: I0216 14:13:19.087767 4812 generic.go:334] "Generic (PLEG): container 
finished" podID="ebd6a20f-bc36-4691-abb5-f0f503274525" containerID="563ebdb4c43675ad48ff93a0021bf260ef2c74eef5f57312ef9a92012c8bc5a3" exitCode=0 Feb 16 14:13:19 crc kubenswrapper[4812]: I0216 14:13:19.087886 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvq4h" event={"ID":"ebd6a20f-bc36-4691-abb5-f0f503274525","Type":"ContainerDied","Data":"563ebdb4c43675ad48ff93a0021bf260ef2c74eef5f57312ef9a92012c8bc5a3"} Feb 16 14:13:21 crc kubenswrapper[4812]: I0216 14:13:21.110023 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvq4h" event={"ID":"ebd6a20f-bc36-4691-abb5-f0f503274525","Type":"ContainerStarted","Data":"fd5476d01c17949bb73867b8a2d654c38c39a12acb493858773f443a937be244"} Feb 16 14:13:21 crc kubenswrapper[4812]: I0216 14:13:21.140927 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mvq4h" podStartSLOduration=3.48093325 podStartE2EDuration="6.140886661s" podCreationTimestamp="2026-02-16 14:13:15 +0000 UTC" firstStartedPulling="2026-02-16 14:13:17.067719974 +0000 UTC m=+2486.132050675" lastFinishedPulling="2026-02-16 14:13:19.727673375 +0000 UTC m=+2488.792004086" observedRunningTime="2026-02-16 14:13:21.133922986 +0000 UTC m=+2490.198253697" watchObservedRunningTime="2026-02-16 14:13:21.140886661 +0000 UTC m=+2490.205217362" Feb 16 14:13:23 crc kubenswrapper[4812]: E0216 14:13:23.884658 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:13:25 crc kubenswrapper[4812]: I0216 14:13:25.899937 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:25 crc kubenswrapper[4812]: I0216 14:13:25.900776 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:25 crc kubenswrapper[4812]: I0216 14:13:25.959590 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:26 crc kubenswrapper[4812]: I0216 14:13:26.215642 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:28 crc kubenswrapper[4812]: I0216 14:13:28.881039 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:13:28 crc kubenswrapper[4812]: E0216 14:13:28.884019 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:13:29 crc kubenswrapper[4812]: I0216 14:13:29.532834 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mvq4h"] Feb 16 14:13:29 crc kubenswrapper[4812]: I0216 14:13:29.533681 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mvq4h" podUID="ebd6a20f-bc36-4691-abb5-f0f503274525" containerName="registry-server" containerID="cri-o://fd5476d01c17949bb73867b8a2d654c38c39a12acb493858773f443a937be244" gracePeriod=2 Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.051140 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.209184 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebd6a20f-bc36-4691-abb5-f0f503274525-catalog-content\") pod \"ebd6a20f-bc36-4691-abb5-f0f503274525\" (UID: \"ebd6a20f-bc36-4691-abb5-f0f503274525\") " Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.209329 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx4m9\" (UniqueName: \"kubernetes.io/projected/ebd6a20f-bc36-4691-abb5-f0f503274525-kube-api-access-tx4m9\") pod \"ebd6a20f-bc36-4691-abb5-f0f503274525\" (UID: \"ebd6a20f-bc36-4691-abb5-f0f503274525\") " Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.209360 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebd6a20f-bc36-4691-abb5-f0f503274525-utilities\") pod \"ebd6a20f-bc36-4691-abb5-f0f503274525\" (UID: \"ebd6a20f-bc36-4691-abb5-f0f503274525\") " Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.211121 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebd6a20f-bc36-4691-abb5-f0f503274525-utilities" (OuterVolumeSpecName: "utilities") pod "ebd6a20f-bc36-4691-abb5-f0f503274525" (UID: "ebd6a20f-bc36-4691-abb5-f0f503274525"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.219050 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebd6a20f-bc36-4691-abb5-f0f503274525-kube-api-access-tx4m9" (OuterVolumeSpecName: "kube-api-access-tx4m9") pod "ebd6a20f-bc36-4691-abb5-f0f503274525" (UID: "ebd6a20f-bc36-4691-abb5-f0f503274525"). InnerVolumeSpecName "kube-api-access-tx4m9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.267662 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebd6a20f-bc36-4691-abb5-f0f503274525-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ebd6a20f-bc36-4691-abb5-f0f503274525" (UID: "ebd6a20f-bc36-4691-abb5-f0f503274525"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.314472 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebd6a20f-bc36-4691-abb5-f0f503274525-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.314519 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx4m9\" (UniqueName: \"kubernetes.io/projected/ebd6a20f-bc36-4691-abb5-f0f503274525-kube-api-access-tx4m9\") on node \"crc\" DevicePath \"\"" Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.314535 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebd6a20f-bc36-4691-abb5-f0f503274525-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.375556 4812 generic.go:334] "Generic (PLEG): container finished" podID="ebd6a20f-bc36-4691-abb5-f0f503274525" containerID="fd5476d01c17949bb73867b8a2d654c38c39a12acb493858773f443a937be244" exitCode=0 Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.375640 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mvq4h" Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.375689 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvq4h" event={"ID":"ebd6a20f-bc36-4691-abb5-f0f503274525","Type":"ContainerDied","Data":"fd5476d01c17949bb73867b8a2d654c38c39a12acb493858773f443a937be244"} Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.375785 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvq4h" event={"ID":"ebd6a20f-bc36-4691-abb5-f0f503274525","Type":"ContainerDied","Data":"e731939b972aaad76e7f4c5061c552cb99df5e9e8a6043dcb14ee9156828b58f"} Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.375815 4812 scope.go:117] "RemoveContainer" containerID="fd5476d01c17949bb73867b8a2d654c38c39a12acb493858773f443a937be244" Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.414247 4812 scope.go:117] "RemoveContainer" containerID="563ebdb4c43675ad48ff93a0021bf260ef2c74eef5f57312ef9a92012c8bc5a3" Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.442201 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mvq4h"] Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.456983 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mvq4h"] Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.459925 4812 scope.go:117] "RemoveContainer" containerID="f0ea591a0bed0cb72e65c4f038f4451bf4271d383822bff3dd1b4a0a561d0150" Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.499620 4812 scope.go:117] "RemoveContainer" containerID="fd5476d01c17949bb73867b8a2d654c38c39a12acb493858773f443a937be244" Feb 16 14:13:30 crc kubenswrapper[4812]: E0216 14:13:30.500341 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fd5476d01c17949bb73867b8a2d654c38c39a12acb493858773f443a937be244\": container with ID starting with fd5476d01c17949bb73867b8a2d654c38c39a12acb493858773f443a937be244 not found: ID does not exist" containerID="fd5476d01c17949bb73867b8a2d654c38c39a12acb493858773f443a937be244" Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.500418 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd5476d01c17949bb73867b8a2d654c38c39a12acb493858773f443a937be244"} err="failed to get container status \"fd5476d01c17949bb73867b8a2d654c38c39a12acb493858773f443a937be244\": rpc error: code = NotFound desc = could not find container \"fd5476d01c17949bb73867b8a2d654c38c39a12acb493858773f443a937be244\": container with ID starting with fd5476d01c17949bb73867b8a2d654c38c39a12acb493858773f443a937be244 not found: ID does not exist" Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.500476 4812 scope.go:117] "RemoveContainer" containerID="563ebdb4c43675ad48ff93a0021bf260ef2c74eef5f57312ef9a92012c8bc5a3" Feb 16 14:13:30 crc kubenswrapper[4812]: E0216 14:13:30.500920 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"563ebdb4c43675ad48ff93a0021bf260ef2c74eef5f57312ef9a92012c8bc5a3\": container with ID starting with 563ebdb4c43675ad48ff93a0021bf260ef2c74eef5f57312ef9a92012c8bc5a3 not found: ID does not exist" containerID="563ebdb4c43675ad48ff93a0021bf260ef2c74eef5f57312ef9a92012c8bc5a3" Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.500989 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"563ebdb4c43675ad48ff93a0021bf260ef2c74eef5f57312ef9a92012c8bc5a3"} err="failed to get container status \"563ebdb4c43675ad48ff93a0021bf260ef2c74eef5f57312ef9a92012c8bc5a3\": rpc error: code = NotFound desc = could not find container \"563ebdb4c43675ad48ff93a0021bf260ef2c74eef5f57312ef9a92012c8bc5a3\": container with ID 
starting with 563ebdb4c43675ad48ff93a0021bf260ef2c74eef5f57312ef9a92012c8bc5a3 not found: ID does not exist" Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.501033 4812 scope.go:117] "RemoveContainer" containerID="f0ea591a0bed0cb72e65c4f038f4451bf4271d383822bff3dd1b4a0a561d0150" Feb 16 14:13:30 crc kubenswrapper[4812]: E0216 14:13:30.501517 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0ea591a0bed0cb72e65c4f038f4451bf4271d383822bff3dd1b4a0a561d0150\": container with ID starting with f0ea591a0bed0cb72e65c4f038f4451bf4271d383822bff3dd1b4a0a561d0150 not found: ID does not exist" containerID="f0ea591a0bed0cb72e65c4f038f4451bf4271d383822bff3dd1b4a0a561d0150" Feb 16 14:13:30 crc kubenswrapper[4812]: I0216 14:13:30.501549 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ea591a0bed0cb72e65c4f038f4451bf4271d383822bff3dd1b4a0a561d0150"} err="failed to get container status \"f0ea591a0bed0cb72e65c4f038f4451bf4271d383822bff3dd1b4a0a561d0150\": rpc error: code = NotFound desc = could not find container \"f0ea591a0bed0cb72e65c4f038f4451bf4271d383822bff3dd1b4a0a561d0150\": container with ID starting with f0ea591a0bed0cb72e65c4f038f4451bf4271d383822bff3dd1b4a0a561d0150 not found: ID does not exist" Feb 16 14:13:31 crc kubenswrapper[4812]: I0216 14:13:31.894686 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebd6a20f-bc36-4691-abb5-f0f503274525" path="/var/lib/kubelet/pods/ebd6a20f-bc36-4691-abb5-f0f503274525/volumes" Feb 16 14:13:37 crc kubenswrapper[4812]: E0216 14:13:37.883035 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" 
Feb 16 14:13:40 crc kubenswrapper[4812]: I0216 14:13:40.879288 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:13:40 crc kubenswrapper[4812]: E0216 14:13:40.879995 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:13:48 crc kubenswrapper[4812]: E0216 14:13:48.884018 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:13:53 crc kubenswrapper[4812]: I0216 14:13:53.879764 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:13:53 crc kubenswrapper[4812]: E0216 14:13:53.881194 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:14:01 crc kubenswrapper[4812]: E0216 14:14:01.888171 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:14:05 crc kubenswrapper[4812]: I0216 14:14:05.879514 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:14:05 crc kubenswrapper[4812]: E0216 14:14:05.880896 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:14:12 crc kubenswrapper[4812]: E0216 14:14:12.882043 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:14:18 crc kubenswrapper[4812]: I0216 14:14:18.880172 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:14:18 crc kubenswrapper[4812]: E0216 14:14:18.881413 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:14:23 crc kubenswrapper[4812]: E0216 14:14:23.883550 4812 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:14:33 crc kubenswrapper[4812]: I0216 14:14:33.881495 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:14:33 crc kubenswrapper[4812]: E0216 14:14:33.883290 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:14:34 crc kubenswrapper[4812]: E0216 14:14:34.887172 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:14:46 crc kubenswrapper[4812]: E0216 14:14:46.883373 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:14:48 crc kubenswrapper[4812]: I0216 14:14:48.879832 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 
16 14:14:49 crc kubenswrapper[4812]: I0216 14:14:49.538255 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerStarted","Data":"50c8740afef41fa58a15dc54138d11cc9c21f246b7407cadf90dca6a16b66a65"} Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.161546 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk"] Feb 16 14:15:00 crc kubenswrapper[4812]: E0216 14:15:00.162743 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd6a20f-bc36-4691-abb5-f0f503274525" containerName="extract-utilities" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.162787 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd6a20f-bc36-4691-abb5-f0f503274525" containerName="extract-utilities" Feb 16 14:15:00 crc kubenswrapper[4812]: E0216 14:15:00.162800 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd6a20f-bc36-4691-abb5-f0f503274525" containerName="extract-content" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.162806 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd6a20f-bc36-4691-abb5-f0f503274525" containerName="extract-content" Feb 16 14:15:00 crc kubenswrapper[4812]: E0216 14:15:00.162818 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd6a20f-bc36-4691-abb5-f0f503274525" containerName="registry-server" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.162825 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd6a20f-bc36-4691-abb5-f0f503274525" containerName="registry-server" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.163152 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebd6a20f-bc36-4691-abb5-f0f503274525" containerName="registry-server" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.164336 4812 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.171684 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.171917 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.197740 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk"] Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.234753 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6742de02-5a05-4f8b-9f3c-5459192e0062-secret-volume\") pod \"collect-profiles-29520855-v75fk\" (UID: \"6742de02-5a05-4f8b-9f3c-5459192e0062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.235350 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6742de02-5a05-4f8b-9f3c-5459192e0062-config-volume\") pod \"collect-profiles-29520855-v75fk\" (UID: \"6742de02-5a05-4f8b-9f3c-5459192e0062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.235462 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr8b8\" (UniqueName: \"kubernetes.io/projected/6742de02-5a05-4f8b-9f3c-5459192e0062-kube-api-access-hr8b8\") pod \"collect-profiles-29520855-v75fk\" (UID: \"6742de02-5a05-4f8b-9f3c-5459192e0062\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.338917 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6742de02-5a05-4f8b-9f3c-5459192e0062-secret-volume\") pod \"collect-profiles-29520855-v75fk\" (UID: \"6742de02-5a05-4f8b-9f3c-5459192e0062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.339019 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6742de02-5a05-4f8b-9f3c-5459192e0062-config-volume\") pod \"collect-profiles-29520855-v75fk\" (UID: \"6742de02-5a05-4f8b-9f3c-5459192e0062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.339377 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr8b8\" (UniqueName: \"kubernetes.io/projected/6742de02-5a05-4f8b-9f3c-5459192e0062-kube-api-access-hr8b8\") pod \"collect-profiles-29520855-v75fk\" (UID: \"6742de02-5a05-4f8b-9f3c-5459192e0062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.341244 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6742de02-5a05-4f8b-9f3c-5459192e0062-config-volume\") pod \"collect-profiles-29520855-v75fk\" (UID: \"6742de02-5a05-4f8b-9f3c-5459192e0062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.351804 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6742de02-5a05-4f8b-9f3c-5459192e0062-secret-volume\") pod \"collect-profiles-29520855-v75fk\" (UID: \"6742de02-5a05-4f8b-9f3c-5459192e0062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.360713 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr8b8\" (UniqueName: \"kubernetes.io/projected/6742de02-5a05-4f8b-9f3c-5459192e0062-kube-api-access-hr8b8\") pod \"collect-profiles-29520855-v75fk\" (UID: \"6742de02-5a05-4f8b-9f3c-5459192e0062\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk" Feb 16 14:15:00 crc kubenswrapper[4812]: I0216 14:15:00.505222 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk" Feb 16 14:15:00 crc kubenswrapper[4812]: E0216 14:15:00.882304 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:15:01 crc kubenswrapper[4812]: I0216 14:15:01.014884 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk"] Feb 16 14:15:01 crc kubenswrapper[4812]: W0216 14:15:01.017635 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6742de02_5a05_4f8b_9f3c_5459192e0062.slice/crio-ece8bffe8096f93bb21a41f625c955cba5d6c1fc07fb939e64a2623fe51a7b8c WatchSource:0}: Error finding container ece8bffe8096f93bb21a41f625c955cba5d6c1fc07fb939e64a2623fe51a7b8c: Status 404 returned error can't find the container with id 
ece8bffe8096f93bb21a41f625c955cba5d6c1fc07fb939e64a2623fe51a7b8c Feb 16 14:15:01 crc kubenswrapper[4812]: I0216 14:15:01.679710 4812 generic.go:334] "Generic (PLEG): container finished" podID="6742de02-5a05-4f8b-9f3c-5459192e0062" containerID="d73fd25d1399e3856aa318360457de8e8ef46f9347b3a0673a31c4cf51c90ae7" exitCode=0 Feb 16 14:15:01 crc kubenswrapper[4812]: I0216 14:15:01.679837 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk" event={"ID":"6742de02-5a05-4f8b-9f3c-5459192e0062","Type":"ContainerDied","Data":"d73fd25d1399e3856aa318360457de8e8ef46f9347b3a0673a31c4cf51c90ae7"} Feb 16 14:15:01 crc kubenswrapper[4812]: I0216 14:15:01.680115 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk" event={"ID":"6742de02-5a05-4f8b-9f3c-5459192e0062","Type":"ContainerStarted","Data":"ece8bffe8096f93bb21a41f625c955cba5d6c1fc07fb939e64a2623fe51a7b8c"} Feb 16 14:15:03 crc kubenswrapper[4812]: I0216 14:15:03.100697 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk" Feb 16 14:15:03 crc kubenswrapper[4812]: I0216 14:15:03.216926 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6742de02-5a05-4f8b-9f3c-5459192e0062-config-volume\") pod \"6742de02-5a05-4f8b-9f3c-5459192e0062\" (UID: \"6742de02-5a05-4f8b-9f3c-5459192e0062\") " Feb 16 14:15:03 crc kubenswrapper[4812]: I0216 14:15:03.217116 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr8b8\" (UniqueName: \"kubernetes.io/projected/6742de02-5a05-4f8b-9f3c-5459192e0062-kube-api-access-hr8b8\") pod \"6742de02-5a05-4f8b-9f3c-5459192e0062\" (UID: \"6742de02-5a05-4f8b-9f3c-5459192e0062\") " Feb 16 14:15:03 crc kubenswrapper[4812]: I0216 14:15:03.217337 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6742de02-5a05-4f8b-9f3c-5459192e0062-secret-volume\") pod \"6742de02-5a05-4f8b-9f3c-5459192e0062\" (UID: \"6742de02-5a05-4f8b-9f3c-5459192e0062\") " Feb 16 14:15:03 crc kubenswrapper[4812]: I0216 14:15:03.218238 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6742de02-5a05-4f8b-9f3c-5459192e0062-config-volume" (OuterVolumeSpecName: "config-volume") pod "6742de02-5a05-4f8b-9f3c-5459192e0062" (UID: "6742de02-5a05-4f8b-9f3c-5459192e0062"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:15:03 crc kubenswrapper[4812]: I0216 14:15:03.227576 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6742de02-5a05-4f8b-9f3c-5459192e0062-kube-api-access-hr8b8" (OuterVolumeSpecName: "kube-api-access-hr8b8") pod "6742de02-5a05-4f8b-9f3c-5459192e0062" (UID: "6742de02-5a05-4f8b-9f3c-5459192e0062"). 
InnerVolumeSpecName "kube-api-access-hr8b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:15:03 crc kubenswrapper[4812]: I0216 14:15:03.229280 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6742de02-5a05-4f8b-9f3c-5459192e0062-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6742de02-5a05-4f8b-9f3c-5459192e0062" (UID: "6742de02-5a05-4f8b-9f3c-5459192e0062"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:15:03 crc kubenswrapper[4812]: I0216 14:15:03.320891 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6742de02-5a05-4f8b-9f3c-5459192e0062-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:15:03 crc kubenswrapper[4812]: I0216 14:15:03.320945 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr8b8\" (UniqueName: \"kubernetes.io/projected/6742de02-5a05-4f8b-9f3c-5459192e0062-kube-api-access-hr8b8\") on node \"crc\" DevicePath \"\"" Feb 16 14:15:03 crc kubenswrapper[4812]: I0216 14:15:03.320958 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6742de02-5a05-4f8b-9f3c-5459192e0062-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:15:03 crc kubenswrapper[4812]: I0216 14:15:03.702797 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk" event={"ID":"6742de02-5a05-4f8b-9f3c-5459192e0062","Type":"ContainerDied","Data":"ece8bffe8096f93bb21a41f625c955cba5d6c1fc07fb939e64a2623fe51a7b8c"} Feb 16 14:15:03 crc kubenswrapper[4812]: I0216 14:15:03.703314 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ece8bffe8096f93bb21a41f625c955cba5d6c1fc07fb939e64a2623fe51a7b8c" Feb 16 14:15:03 crc kubenswrapper[4812]: I0216 14:15:03.702861 4812 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520855-v75fk" Feb 16 14:15:03 crc kubenswrapper[4812]: E0216 14:15:03.839343 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6742de02_5a05_4f8b_9f3c_5459192e0062.slice\": RecentStats: unable to find data in memory cache]" Feb 16 14:15:04 crc kubenswrapper[4812]: I0216 14:15:04.210836 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4"] Feb 16 14:15:04 crc kubenswrapper[4812]: I0216 14:15:04.225460 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520810-vq6f4"] Feb 16 14:15:05 crc kubenswrapper[4812]: I0216 14:15:05.900353 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca937fd-eef1-4f91-b825-18d5429526a9" path="/var/lib/kubelet/pods/fca937fd-eef1-4f91-b825-18d5429526a9/volumes" Feb 16 14:15:09 crc kubenswrapper[4812]: I0216 14:15:09.103732 4812 scope.go:117] "RemoveContainer" containerID="0358c19526fd9d5115b8ee38021054badf34997819a4529cb16cd73276d636d6" Feb 16 14:15:15 crc kubenswrapper[4812]: E0216 14:15:15.883560 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:15:26 crc kubenswrapper[4812]: I0216 14:15:26.885811 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 14:15:27 crc kubenswrapper[4812]: E0216 14:15:27.023327 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown 
desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 14:15:27 crc kubenswrapper[4812]: E0216 14:15:27.023416 4812 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 14:15:27 crc kubenswrapper[4812]: E0216 14:15:27.023632 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq54s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-krnzs_openstack(a7d4eae6-781f-4675-a6c3-ee0f1589c735): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 14:15:27 crc kubenswrapper[4812]: E0216 14:15:27.024955 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:15:37 crc kubenswrapper[4812]: E0216 14:15:37.883294 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:15:52 crc kubenswrapper[4812]: E0216 14:15:52.883246 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:16:03 crc kubenswrapper[4812]: E0216 14:16:03.882638 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:16:15 crc kubenswrapper[4812]: E0216 14:16:15.881877 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:16:26 crc kubenswrapper[4812]: E0216 14:16:26.882837 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:16:39 crc kubenswrapper[4812]: E0216 14:16:39.882069 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:16:52 crc kubenswrapper[4812]: E0216 14:16:52.881799 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:17:07 crc kubenswrapper[4812]: E0216 14:17:07.883078 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:17:14 crc kubenswrapper[4812]: I0216 14:17:14.548891 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:17:14 crc kubenswrapper[4812]: I0216 14:17:14.549529 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:17:19 crc kubenswrapper[4812]: E0216 14:17:19.882158 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:17:32 crc kubenswrapper[4812]: E0216 14:17:32.883750 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:17:44 crc kubenswrapper[4812]: I0216 14:17:44.553421 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:17:44 crc kubenswrapper[4812]: I0216 14:17:44.554283 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:17:45 crc kubenswrapper[4812]: E0216 14:17:45.884013 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:17:57 crc kubenswrapper[4812]: E0216 14:17:57.884794 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:18:00 crc kubenswrapper[4812]: I0216 14:18:00.699494 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-swvkk"] Feb 16 14:18:00 crc kubenswrapper[4812]: E0216 14:18:00.700422 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6742de02-5a05-4f8b-9f3c-5459192e0062" containerName="collect-profiles" Feb 16 14:18:00 crc kubenswrapper[4812]: I0216 14:18:00.700436 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6742de02-5a05-4f8b-9f3c-5459192e0062" containerName="collect-profiles" Feb 16 14:18:00 crc kubenswrapper[4812]: I0216 14:18:00.700671 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6742de02-5a05-4f8b-9f3c-5459192e0062" containerName="collect-profiles" Feb 16 14:18:00 crc kubenswrapper[4812]: I0216 14:18:00.702252 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:00 crc kubenswrapper[4812]: I0216 14:18:00.719028 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-swvkk"] Feb 16 14:18:00 crc kubenswrapper[4812]: I0216 14:18:00.777723 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-catalog-content\") pod \"redhat-marketplace-swvkk\" (UID: \"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8\") " pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:00 crc kubenswrapper[4812]: I0216 14:18:00.777790 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97zgn\" (UniqueName: \"kubernetes.io/projected/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-kube-api-access-97zgn\") pod \"redhat-marketplace-swvkk\" (UID: \"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8\") " pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:00 crc kubenswrapper[4812]: I0216 14:18:00.777851 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-utilities\") pod \"redhat-marketplace-swvkk\" (UID: \"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8\") " pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:00 crc kubenswrapper[4812]: I0216 14:18:00.879908 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-catalog-content\") pod \"redhat-marketplace-swvkk\" (UID: \"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8\") " pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:00 crc kubenswrapper[4812]: I0216 14:18:00.879973 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-97zgn\" (UniqueName: \"kubernetes.io/projected/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-kube-api-access-97zgn\") pod \"redhat-marketplace-swvkk\" (UID: \"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8\") " pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:00 crc kubenswrapper[4812]: I0216 14:18:00.880013 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-utilities\") pod \"redhat-marketplace-swvkk\" (UID: \"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8\") " pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:00 crc kubenswrapper[4812]: I0216 14:18:00.881022 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-utilities\") pod \"redhat-marketplace-swvkk\" (UID: \"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8\") " pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:00 crc kubenswrapper[4812]: I0216 14:18:00.881011 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-catalog-content\") pod \"redhat-marketplace-swvkk\" (UID: \"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8\") " pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:00 crc kubenswrapper[4812]: I0216 14:18:00.906714 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97zgn\" (UniqueName: \"kubernetes.io/projected/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-kube-api-access-97zgn\") pod \"redhat-marketplace-swvkk\" (UID: \"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8\") " pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:01 crc kubenswrapper[4812]: I0216 14:18:01.064647 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:01 crc kubenswrapper[4812]: I0216 14:18:01.571721 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-swvkk"] Feb 16 14:18:02 crc kubenswrapper[4812]: I0216 14:18:02.122210 4812 generic.go:334] "Generic (PLEG): container finished" podID="cc22f6d7-ec4f-4a54-8754-a0f85310dcb8" containerID="77ee5875a96e0e577ecaf3146df54e745fc20916fe27f19ff2c481f0e092a7fa" exitCode=0 Feb 16 14:18:02 crc kubenswrapper[4812]: I0216 14:18:02.122355 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swvkk" event={"ID":"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8","Type":"ContainerDied","Data":"77ee5875a96e0e577ecaf3146df54e745fc20916fe27f19ff2c481f0e092a7fa"} Feb 16 14:18:02 crc kubenswrapper[4812]: I0216 14:18:02.122755 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swvkk" event={"ID":"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8","Type":"ContainerStarted","Data":"f6fc74330cf77c7db1a991de73eb70e3df405684094781e6e561ca55ecca5379"} Feb 16 14:18:04 crc kubenswrapper[4812]: I0216 14:18:04.361145 4812 generic.go:334] "Generic (PLEG): container finished" podID="cc22f6d7-ec4f-4a54-8754-a0f85310dcb8" containerID="b33c6207a6548a1f7affbc305b3becf11f485383b3e999b9dfe110c66a30125a" exitCode=0 Feb 16 14:18:04 crc kubenswrapper[4812]: I0216 14:18:04.361255 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swvkk" event={"ID":"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8","Type":"ContainerDied","Data":"b33c6207a6548a1f7affbc305b3becf11f485383b3e999b9dfe110c66a30125a"} Feb 16 14:18:05 crc kubenswrapper[4812]: I0216 14:18:05.377681 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swvkk" 
event={"ID":"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8","Type":"ContainerStarted","Data":"9a24651ac02e208f445d4e5afa2d2a517a8d9e0d0ab9b9b646780a5c977c96cf"} Feb 16 14:18:05 crc kubenswrapper[4812]: I0216 14:18:05.410187 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-swvkk" podStartSLOduration=2.732045177 podStartE2EDuration="5.410156535s" podCreationTimestamp="2026-02-16 14:18:00 +0000 UTC" firstStartedPulling="2026-02-16 14:18:02.124748175 +0000 UTC m=+2771.189078876" lastFinishedPulling="2026-02-16 14:18:04.802859533 +0000 UTC m=+2773.867190234" observedRunningTime="2026-02-16 14:18:05.400931792 +0000 UTC m=+2774.465262503" watchObservedRunningTime="2026-02-16 14:18:05.410156535 +0000 UTC m=+2774.474487236" Feb 16 14:18:09 crc kubenswrapper[4812]: E0216 14:18:09.881426 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:18:11 crc kubenswrapper[4812]: I0216 14:18:11.065681 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:11 crc kubenswrapper[4812]: I0216 14:18:11.066190 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:11 crc kubenswrapper[4812]: I0216 14:18:11.122861 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:11 crc kubenswrapper[4812]: I0216 14:18:11.487095 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:11 crc kubenswrapper[4812]: I0216 
14:18:11.545356 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-swvkk"] Feb 16 14:18:13 crc kubenswrapper[4812]: I0216 14:18:13.470842 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-swvkk" podUID="cc22f6d7-ec4f-4a54-8754-a0f85310dcb8" containerName="registry-server" containerID="cri-o://9a24651ac02e208f445d4e5afa2d2a517a8d9e0d0ab9b9b646780a5c977c96cf" gracePeriod=2 Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.079817 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.480719 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-catalog-content\") pod \"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8\" (UID: \"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8\") " Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.480795 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-utilities\") pod \"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8\" (UID: \"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8\") " Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.480896 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97zgn\" (UniqueName: \"kubernetes.io/projected/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-kube-api-access-97zgn\") pod \"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8\" (UID: \"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8\") " Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.482148 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-utilities" (OuterVolumeSpecName: 
"utilities") pod "cc22f6d7-ec4f-4a54-8754-a0f85310dcb8" (UID: "cc22f6d7-ec4f-4a54-8754-a0f85310dcb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.487663 4812 generic.go:334] "Generic (PLEG): container finished" podID="cc22f6d7-ec4f-4a54-8754-a0f85310dcb8" containerID="9a24651ac02e208f445d4e5afa2d2a517a8d9e0d0ab9b9b646780a5c977c96cf" exitCode=0 Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.487760 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-swvkk" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.487753 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swvkk" event={"ID":"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8","Type":"ContainerDied","Data":"9a24651ac02e208f445d4e5afa2d2a517a8d9e0d0ab9b9b646780a5c977c96cf"} Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.487829 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swvkk" event={"ID":"cc22f6d7-ec4f-4a54-8754-a0f85310dcb8","Type":"ContainerDied","Data":"f6fc74330cf77c7db1a991de73eb70e3df405684094781e6e561ca55ecca5379"} Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.487854 4812 scope.go:117] "RemoveContainer" containerID="9a24651ac02e208f445d4e5afa2d2a517a8d9e0d0ab9b9b646780a5c977c96cf" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.489799 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-kube-api-access-97zgn" (OuterVolumeSpecName: "kube-api-access-97zgn") pod "cc22f6d7-ec4f-4a54-8754-a0f85310dcb8" (UID: "cc22f6d7-ec4f-4a54-8754-a0f85310dcb8"). InnerVolumeSpecName "kube-api-access-97zgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.534055 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc22f6d7-ec4f-4a54-8754-a0f85310dcb8" (UID: "cc22f6d7-ec4f-4a54-8754-a0f85310dcb8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.548799 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.548886 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.548953 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.550213 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50c8740afef41fa58a15dc54138d11cc9c21f246b7407cadf90dca6a16b66a65"} pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.550291 4812 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" containerID="cri-o://50c8740afef41fa58a15dc54138d11cc9c21f246b7407cadf90dca6a16b66a65" gracePeriod=600 Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.568371 4812 scope.go:117] "RemoveContainer" containerID="b33c6207a6548a1f7affbc305b3becf11f485383b3e999b9dfe110c66a30125a" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.584970 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.585028 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.585043 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97zgn\" (UniqueName: \"kubernetes.io/projected/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8-kube-api-access-97zgn\") on node \"crc\" DevicePath \"\"" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.597724 4812 scope.go:117] "RemoveContainer" containerID="77ee5875a96e0e577ecaf3146df54e745fc20916fe27f19ff2c481f0e092a7fa" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.662738 4812 scope.go:117] "RemoveContainer" containerID="9a24651ac02e208f445d4e5afa2d2a517a8d9e0d0ab9b9b646780a5c977c96cf" Feb 16 14:18:14 crc kubenswrapper[4812]: E0216 14:18:14.669024 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a24651ac02e208f445d4e5afa2d2a517a8d9e0d0ab9b9b646780a5c977c96cf\": container with ID starting with 9a24651ac02e208f445d4e5afa2d2a517a8d9e0d0ab9b9b646780a5c977c96cf not found: 
ID does not exist" containerID="9a24651ac02e208f445d4e5afa2d2a517a8d9e0d0ab9b9b646780a5c977c96cf" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.669083 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a24651ac02e208f445d4e5afa2d2a517a8d9e0d0ab9b9b646780a5c977c96cf"} err="failed to get container status \"9a24651ac02e208f445d4e5afa2d2a517a8d9e0d0ab9b9b646780a5c977c96cf\": rpc error: code = NotFound desc = could not find container \"9a24651ac02e208f445d4e5afa2d2a517a8d9e0d0ab9b9b646780a5c977c96cf\": container with ID starting with 9a24651ac02e208f445d4e5afa2d2a517a8d9e0d0ab9b9b646780a5c977c96cf not found: ID does not exist" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.669124 4812 scope.go:117] "RemoveContainer" containerID="b33c6207a6548a1f7affbc305b3becf11f485383b3e999b9dfe110c66a30125a" Feb 16 14:18:14 crc kubenswrapper[4812]: E0216 14:18:14.670128 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b33c6207a6548a1f7affbc305b3becf11f485383b3e999b9dfe110c66a30125a\": container with ID starting with b33c6207a6548a1f7affbc305b3becf11f485383b3e999b9dfe110c66a30125a not found: ID does not exist" containerID="b33c6207a6548a1f7affbc305b3becf11f485383b3e999b9dfe110c66a30125a" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.670190 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b33c6207a6548a1f7affbc305b3becf11f485383b3e999b9dfe110c66a30125a"} err="failed to get container status \"b33c6207a6548a1f7affbc305b3becf11f485383b3e999b9dfe110c66a30125a\": rpc error: code = NotFound desc = could not find container \"b33c6207a6548a1f7affbc305b3becf11f485383b3e999b9dfe110c66a30125a\": container with ID starting with b33c6207a6548a1f7affbc305b3becf11f485383b3e999b9dfe110c66a30125a not found: ID does not exist" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.670238 4812 
scope.go:117] "RemoveContainer" containerID="77ee5875a96e0e577ecaf3146df54e745fc20916fe27f19ff2c481f0e092a7fa" Feb 16 14:18:14 crc kubenswrapper[4812]: E0216 14:18:14.670736 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77ee5875a96e0e577ecaf3146df54e745fc20916fe27f19ff2c481f0e092a7fa\": container with ID starting with 77ee5875a96e0e577ecaf3146df54e745fc20916fe27f19ff2c481f0e092a7fa not found: ID does not exist" containerID="77ee5875a96e0e577ecaf3146df54e745fc20916fe27f19ff2c481f0e092a7fa" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.670773 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77ee5875a96e0e577ecaf3146df54e745fc20916fe27f19ff2c481f0e092a7fa"} err="failed to get container status \"77ee5875a96e0e577ecaf3146df54e745fc20916fe27f19ff2c481f0e092a7fa\": rpc error: code = NotFound desc = could not find container \"77ee5875a96e0e577ecaf3146df54e745fc20916fe27f19ff2c481f0e092a7fa\": container with ID starting with 77ee5875a96e0e577ecaf3146df54e745fc20916fe27f19ff2c481f0e092a7fa not found: ID does not exist" Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.866556 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-swvkk"] Feb 16 14:18:14 crc kubenswrapper[4812]: I0216 14:18:14.877552 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-swvkk"] Feb 16 14:18:15 crc kubenswrapper[4812]: I0216 14:18:15.511015 4812 generic.go:334] "Generic (PLEG): container finished" podID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerID="50c8740afef41fa58a15dc54138d11cc9c21f246b7407cadf90dca6a16b66a65" exitCode=0 Feb 16 14:18:15 crc kubenswrapper[4812]: I0216 14:18:15.511115 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" 
event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerDied","Data":"50c8740afef41fa58a15dc54138d11cc9c21f246b7407cadf90dca6a16b66a65"} Feb 16 14:18:15 crc kubenswrapper[4812]: I0216 14:18:15.511582 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerStarted","Data":"4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d"} Feb 16 14:18:15 crc kubenswrapper[4812]: I0216 14:18:15.511648 4812 scope.go:117] "RemoveContainer" containerID="f3cacf320a72106f500d0d2c29eab121eed8c0bdd3f2aa2484da67d29caa2a70" Feb 16 14:18:15 crc kubenswrapper[4812]: I0216 14:18:15.893886 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc22f6d7-ec4f-4a54-8754-a0f85310dcb8" path="/var/lib/kubelet/pods/cc22f6d7-ec4f-4a54-8754-a0f85310dcb8/volumes" Feb 16 14:18:22 crc kubenswrapper[4812]: E0216 14:18:22.883094 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:18:34 crc kubenswrapper[4812]: E0216 14:18:34.881843 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:18:45 crc kubenswrapper[4812]: E0216 14:18:45.882794 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:18:56 crc kubenswrapper[4812]: E0216 14:18:56.882762 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 14:19:08.283788 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pzcgl"] Feb 16 14:19:08 crc kubenswrapper[4812]: E0216 14:19:08.286819 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc22f6d7-ec4f-4a54-8754-a0f85310dcb8" containerName="extract-content" Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 14:19:08.286979 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc22f6d7-ec4f-4a54-8754-a0f85310dcb8" containerName="extract-content" Feb 16 14:19:08 crc kubenswrapper[4812]: E0216 14:19:08.287074 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc22f6d7-ec4f-4a54-8754-a0f85310dcb8" containerName="registry-server" Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 14:19:08.287148 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc22f6d7-ec4f-4a54-8754-a0f85310dcb8" containerName="registry-server" Feb 16 14:19:08 crc kubenswrapper[4812]: E0216 14:19:08.287254 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc22f6d7-ec4f-4a54-8754-a0f85310dcb8" containerName="extract-utilities" Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 14:19:08.287333 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc22f6d7-ec4f-4a54-8754-a0f85310dcb8" containerName="extract-utilities" Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 
14:19:08.287729 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc22f6d7-ec4f-4a54-8754-a0f85310dcb8" containerName="registry-server" Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 14:19:08.289904 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 14:19:08.302573 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pzcgl"] Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 14:19:08.402352 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d90eb36-2d8e-4217-885b-62d97da57e7c-utilities\") pod \"community-operators-pzcgl\" (UID: \"2d90eb36-2d8e-4217-885b-62d97da57e7c\") " pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 14:19:08.403113 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmk2r\" (UniqueName: \"kubernetes.io/projected/2d90eb36-2d8e-4217-885b-62d97da57e7c-kube-api-access-kmk2r\") pod \"community-operators-pzcgl\" (UID: \"2d90eb36-2d8e-4217-885b-62d97da57e7c\") " pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 14:19:08.403489 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d90eb36-2d8e-4217-885b-62d97da57e7c-catalog-content\") pod \"community-operators-pzcgl\" (UID: \"2d90eb36-2d8e-4217-885b-62d97da57e7c\") " pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 14:19:08.507004 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmk2r\" (UniqueName: 
\"kubernetes.io/projected/2d90eb36-2d8e-4217-885b-62d97da57e7c-kube-api-access-kmk2r\") pod \"community-operators-pzcgl\" (UID: \"2d90eb36-2d8e-4217-885b-62d97da57e7c\") " pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 14:19:08.507135 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d90eb36-2d8e-4217-885b-62d97da57e7c-catalog-content\") pod \"community-operators-pzcgl\" (UID: \"2d90eb36-2d8e-4217-885b-62d97da57e7c\") " pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 14:19:08.507219 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d90eb36-2d8e-4217-885b-62d97da57e7c-utilities\") pod \"community-operators-pzcgl\" (UID: \"2d90eb36-2d8e-4217-885b-62d97da57e7c\") " pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 14:19:08.507959 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d90eb36-2d8e-4217-885b-62d97da57e7c-catalog-content\") pod \"community-operators-pzcgl\" (UID: \"2d90eb36-2d8e-4217-885b-62d97da57e7c\") " pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 14:19:08.508078 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d90eb36-2d8e-4217-885b-62d97da57e7c-utilities\") pod \"community-operators-pzcgl\" (UID: \"2d90eb36-2d8e-4217-885b-62d97da57e7c\") " pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 14:19:08.534167 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmk2r\" (UniqueName: 
\"kubernetes.io/projected/2d90eb36-2d8e-4217-885b-62d97da57e7c-kube-api-access-kmk2r\") pod \"community-operators-pzcgl\" (UID: \"2d90eb36-2d8e-4217-885b-62d97da57e7c\") " pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:08 crc kubenswrapper[4812]: I0216 14:19:08.613700 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:08 crc kubenswrapper[4812]: E0216 14:19:08.895802 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:19:09 crc kubenswrapper[4812]: I0216 14:19:09.232233 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pzcgl"] Feb 16 14:19:10 crc kubenswrapper[4812]: I0216 14:19:10.129070 4812 generic.go:334] "Generic (PLEG): container finished" podID="2d90eb36-2d8e-4217-885b-62d97da57e7c" containerID="71f75f157766bcc3455b5019eaab01ee4c2caa8590dfdf62f9f60f87d5b24694" exitCode=0 Feb 16 14:19:10 crc kubenswrapper[4812]: I0216 14:19:10.129191 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzcgl" event={"ID":"2d90eb36-2d8e-4217-885b-62d97da57e7c","Type":"ContainerDied","Data":"71f75f157766bcc3455b5019eaab01ee4c2caa8590dfdf62f9f60f87d5b24694"} Feb 16 14:19:10 crc kubenswrapper[4812]: I0216 14:19:10.129560 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzcgl" event={"ID":"2d90eb36-2d8e-4217-885b-62d97da57e7c","Type":"ContainerStarted","Data":"587a40b45a9f5438f5060e729998e72f972f51a800a8391d5a0dcb69af24e10b"} Feb 16 14:19:11 crc kubenswrapper[4812]: I0216 14:19:11.141850 4812 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-pzcgl" event={"ID":"2d90eb36-2d8e-4217-885b-62d97da57e7c","Type":"ContainerStarted","Data":"68e4bec260c8e5bd0c4959e25cec251d7347325033bb83f522b7c3ee0bf43823"} Feb 16 14:19:12 crc kubenswrapper[4812]: I0216 14:19:12.160203 4812 generic.go:334] "Generic (PLEG): container finished" podID="2d90eb36-2d8e-4217-885b-62d97da57e7c" containerID="68e4bec260c8e5bd0c4959e25cec251d7347325033bb83f522b7c3ee0bf43823" exitCode=0 Feb 16 14:19:12 crc kubenswrapper[4812]: I0216 14:19:12.160289 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzcgl" event={"ID":"2d90eb36-2d8e-4217-885b-62d97da57e7c","Type":"ContainerDied","Data":"68e4bec260c8e5bd0c4959e25cec251d7347325033bb83f522b7c3ee0bf43823"} Feb 16 14:19:14 crc kubenswrapper[4812]: I0216 14:19:14.188841 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzcgl" event={"ID":"2d90eb36-2d8e-4217-885b-62d97da57e7c","Type":"ContainerStarted","Data":"e67ffb62368ae67f792e28ed41de7599a0b5533e08b87e8a31e74d08cd4f11e5"} Feb 16 14:19:14 crc kubenswrapper[4812]: I0216 14:19:14.217784 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pzcgl" podStartSLOduration=2.738519401 podStartE2EDuration="6.21776207s" podCreationTimestamp="2026-02-16 14:19:08 +0000 UTC" firstStartedPulling="2026-02-16 14:19:10.132239537 +0000 UTC m=+2839.196570238" lastFinishedPulling="2026-02-16 14:19:13.611482206 +0000 UTC m=+2842.675812907" observedRunningTime="2026-02-16 14:19:14.21042403 +0000 UTC m=+2843.274754731" watchObservedRunningTime="2026-02-16 14:19:14.21776207 +0000 UTC m=+2843.282092771" Feb 16 14:19:18 crc kubenswrapper[4812]: I0216 14:19:18.614386 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:18 crc kubenswrapper[4812]: I0216 
14:19:18.615529 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:18 crc kubenswrapper[4812]: I0216 14:19:18.674536 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:19 crc kubenswrapper[4812]: I0216 14:19:19.291242 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:19 crc kubenswrapper[4812]: I0216 14:19:19.348100 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pzcgl"] Feb 16 14:19:20 crc kubenswrapper[4812]: E0216 14:19:20.881829 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:19:21 crc kubenswrapper[4812]: I0216 14:19:21.272065 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pzcgl" podUID="2d90eb36-2d8e-4217-885b-62d97da57e7c" containerName="registry-server" containerID="cri-o://e67ffb62368ae67f792e28ed41de7599a0b5533e08b87e8a31e74d08cd4f11e5" gracePeriod=2 Feb 16 14:19:22 crc kubenswrapper[4812]: I0216 14:19:22.285099 4812 generic.go:334] "Generic (PLEG): container finished" podID="2d90eb36-2d8e-4217-885b-62d97da57e7c" containerID="e67ffb62368ae67f792e28ed41de7599a0b5533e08b87e8a31e74d08cd4f11e5" exitCode=0 Feb 16 14:19:22 crc kubenswrapper[4812]: I0216 14:19:22.285168 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzcgl" 
event={"ID":"2d90eb36-2d8e-4217-885b-62d97da57e7c","Type":"ContainerDied","Data":"e67ffb62368ae67f792e28ed41de7599a0b5533e08b87e8a31e74d08cd4f11e5"} Feb 16 14:19:22 crc kubenswrapper[4812]: I0216 14:19:22.973801 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.092761 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d90eb36-2d8e-4217-885b-62d97da57e7c-utilities\") pod \"2d90eb36-2d8e-4217-885b-62d97da57e7c\" (UID: \"2d90eb36-2d8e-4217-885b-62d97da57e7c\") " Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.092824 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d90eb36-2d8e-4217-885b-62d97da57e7c-catalog-content\") pod \"2d90eb36-2d8e-4217-885b-62d97da57e7c\" (UID: \"2d90eb36-2d8e-4217-885b-62d97da57e7c\") " Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.092987 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmk2r\" (UniqueName: \"kubernetes.io/projected/2d90eb36-2d8e-4217-885b-62d97da57e7c-kube-api-access-kmk2r\") pod \"2d90eb36-2d8e-4217-885b-62d97da57e7c\" (UID: \"2d90eb36-2d8e-4217-885b-62d97da57e7c\") " Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.093838 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d90eb36-2d8e-4217-885b-62d97da57e7c-utilities" (OuterVolumeSpecName: "utilities") pod "2d90eb36-2d8e-4217-885b-62d97da57e7c" (UID: "2d90eb36-2d8e-4217-885b-62d97da57e7c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.100840 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d90eb36-2d8e-4217-885b-62d97da57e7c-kube-api-access-kmk2r" (OuterVolumeSpecName: "kube-api-access-kmk2r") pod "2d90eb36-2d8e-4217-885b-62d97da57e7c" (UID: "2d90eb36-2d8e-4217-885b-62d97da57e7c"). InnerVolumeSpecName "kube-api-access-kmk2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.145757 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d90eb36-2d8e-4217-885b-62d97da57e7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d90eb36-2d8e-4217-885b-62d97da57e7c" (UID: "2d90eb36-2d8e-4217-885b-62d97da57e7c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.196080 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d90eb36-2d8e-4217-885b-62d97da57e7c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.196144 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d90eb36-2d8e-4217-885b-62d97da57e7c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.196171 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmk2r\" (UniqueName: \"kubernetes.io/projected/2d90eb36-2d8e-4217-885b-62d97da57e7c-kube-api-access-kmk2r\") on node \"crc\" DevicePath \"\"" Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.301228 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pzcgl" 
event={"ID":"2d90eb36-2d8e-4217-885b-62d97da57e7c","Type":"ContainerDied","Data":"587a40b45a9f5438f5060e729998e72f972f51a800a8391d5a0dcb69af24e10b"} Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.303235 4812 scope.go:117] "RemoveContainer" containerID="e67ffb62368ae67f792e28ed41de7599a0b5533e08b87e8a31e74d08cd4f11e5" Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.301340 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pzcgl" Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.332222 4812 scope.go:117] "RemoveContainer" containerID="68e4bec260c8e5bd0c4959e25cec251d7347325033bb83f522b7c3ee0bf43823" Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.350935 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pzcgl"] Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.360679 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pzcgl"] Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.368557 4812 scope.go:117] "RemoveContainer" containerID="71f75f157766bcc3455b5019eaab01ee4c2caa8590dfdf62f9f60f87d5b24694" Feb 16 14:19:23 crc kubenswrapper[4812]: I0216 14:19:23.894794 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d90eb36-2d8e-4217-885b-62d97da57e7c" path="/var/lib/kubelet/pods/2d90eb36-2d8e-4217-885b-62d97da57e7c/volumes" Feb 16 14:19:34 crc kubenswrapper[4812]: E0216 14:19:34.883516 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:19:49 crc kubenswrapper[4812]: E0216 14:19:49.882320 4812 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:20:04 crc kubenswrapper[4812]: E0216 14:20:04.882387 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:20:14 crc kubenswrapper[4812]: I0216 14:20:14.548893 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:20:14 crc kubenswrapper[4812]: I0216 14:20:14.549692 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:20:16 crc kubenswrapper[4812]: E0216 14:20:16.882969 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:20:27 crc kubenswrapper[4812]: I0216 14:20:27.884808 4812 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 16 14:20:28 crc kubenswrapper[4812]: E0216 14:20:28.013091 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 14:20:28 crc kubenswrapper[4812]: E0216 14:20:28.013193 4812 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 14:20:28 crc kubenswrapper[4812]: E0216 14:20:28.013409 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq54s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-krnzs_openstack(a7d4eae6-781f-4675-a6c3-ee0f1589c735): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 14:20:28 crc kubenswrapper[4812]: E0216 14:20:28.014744 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:20:35 crc kubenswrapper[4812]: I0216 14:20:35.680167 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tckp7"] Feb 16 14:20:35 crc kubenswrapper[4812]: E0216 14:20:35.681627 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d90eb36-2d8e-4217-885b-62d97da57e7c" containerName="extract-content" Feb 16 14:20:35 crc kubenswrapper[4812]: I0216 14:20:35.681646 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d90eb36-2d8e-4217-885b-62d97da57e7c" containerName="extract-content" Feb 16 14:20:35 crc kubenswrapper[4812]: E0216 14:20:35.681688 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d90eb36-2d8e-4217-885b-62d97da57e7c" containerName="registry-server" Feb 16 14:20:35 crc kubenswrapper[4812]: I0216 14:20:35.681695 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d90eb36-2d8e-4217-885b-62d97da57e7c" containerName="registry-server" Feb 16 14:20:35 crc kubenswrapper[4812]: E0216 14:20:35.681712 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d90eb36-2d8e-4217-885b-62d97da57e7c" containerName="extract-utilities" Feb 16 14:20:35 crc kubenswrapper[4812]: I0216 14:20:35.681719 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d90eb36-2d8e-4217-885b-62d97da57e7c" containerName="extract-utilities" Feb 16 14:20:35 crc kubenswrapper[4812]: I0216 14:20:35.682014 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d90eb36-2d8e-4217-885b-62d97da57e7c" containerName="registry-server" Feb 16 14:20:35 crc kubenswrapper[4812]: I0216 14:20:35.685089 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:35 crc kubenswrapper[4812]: I0216 14:20:35.707152 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tckp7"] Feb 16 14:20:35 crc kubenswrapper[4812]: I0216 14:20:35.777901 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpb8l\" (UniqueName: \"kubernetes.io/projected/a7a0204b-b7e7-4e06-98f0-192f694d2b40-kube-api-access-kpb8l\") pod \"redhat-operators-tckp7\" (UID: \"a7a0204b-b7e7-4e06-98f0-192f694d2b40\") " pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:35 crc kubenswrapper[4812]: I0216 14:20:35.778036 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7a0204b-b7e7-4e06-98f0-192f694d2b40-utilities\") pod \"redhat-operators-tckp7\" (UID: \"a7a0204b-b7e7-4e06-98f0-192f694d2b40\") " pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:35 crc kubenswrapper[4812]: I0216 14:20:35.778099 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7a0204b-b7e7-4e06-98f0-192f694d2b40-catalog-content\") pod \"redhat-operators-tckp7\" (UID: \"a7a0204b-b7e7-4e06-98f0-192f694d2b40\") " pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:35 crc kubenswrapper[4812]: I0216 14:20:35.880673 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpb8l\" (UniqueName: \"kubernetes.io/projected/a7a0204b-b7e7-4e06-98f0-192f694d2b40-kube-api-access-kpb8l\") pod \"redhat-operators-tckp7\" (UID: \"a7a0204b-b7e7-4e06-98f0-192f694d2b40\") " pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:35 crc kubenswrapper[4812]: I0216 14:20:35.880791 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7a0204b-b7e7-4e06-98f0-192f694d2b40-utilities\") pod \"redhat-operators-tckp7\" (UID: \"a7a0204b-b7e7-4e06-98f0-192f694d2b40\") " pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:35 crc kubenswrapper[4812]: I0216 14:20:35.880841 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7a0204b-b7e7-4e06-98f0-192f694d2b40-catalog-content\") pod \"redhat-operators-tckp7\" (UID: \"a7a0204b-b7e7-4e06-98f0-192f694d2b40\") " pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:35 crc kubenswrapper[4812]: I0216 14:20:35.881439 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7a0204b-b7e7-4e06-98f0-192f694d2b40-utilities\") pod \"redhat-operators-tckp7\" (UID: \"a7a0204b-b7e7-4e06-98f0-192f694d2b40\") " pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:35 crc kubenswrapper[4812]: I0216 14:20:35.881654 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7a0204b-b7e7-4e06-98f0-192f694d2b40-catalog-content\") pod \"redhat-operators-tckp7\" (UID: \"a7a0204b-b7e7-4e06-98f0-192f694d2b40\") " pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:35 crc kubenswrapper[4812]: I0216 14:20:35.907029 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpb8l\" (UniqueName: \"kubernetes.io/projected/a7a0204b-b7e7-4e06-98f0-192f694d2b40-kube-api-access-kpb8l\") pod \"redhat-operators-tckp7\" (UID: \"a7a0204b-b7e7-4e06-98f0-192f694d2b40\") " pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:36 crc kubenswrapper[4812]: I0216 14:20:36.013155 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:36 crc kubenswrapper[4812]: I0216 14:20:36.513840 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tckp7"] Feb 16 14:20:37 crc kubenswrapper[4812]: I0216 14:20:37.189156 4812 generic.go:334] "Generic (PLEG): container finished" podID="a7a0204b-b7e7-4e06-98f0-192f694d2b40" containerID="50690f0e513a7e7f8515f73c5ff621c6d8c03972e541574979e83db4ac33f2e8" exitCode=0 Feb 16 14:20:37 crc kubenswrapper[4812]: I0216 14:20:37.189553 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tckp7" event={"ID":"a7a0204b-b7e7-4e06-98f0-192f694d2b40","Type":"ContainerDied","Data":"50690f0e513a7e7f8515f73c5ff621c6d8c03972e541574979e83db4ac33f2e8"} Feb 16 14:20:37 crc kubenswrapper[4812]: I0216 14:20:37.189587 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tckp7" event={"ID":"a7a0204b-b7e7-4e06-98f0-192f694d2b40","Type":"ContainerStarted","Data":"65e62460ae18e92c6105e62577307ec5a70e903dc9399c4b1032f7cee7ba60bc"} Feb 16 14:20:38 crc kubenswrapper[4812]: I0216 14:20:38.205965 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tckp7" event={"ID":"a7a0204b-b7e7-4e06-98f0-192f694d2b40","Type":"ContainerStarted","Data":"e0e98c58399e96ba47038db808459e83faf61de3ecbf21679c3e56befb13ed70"} Feb 16 14:20:41 crc kubenswrapper[4812]: I0216 14:20:41.242218 4812 generic.go:334] "Generic (PLEG): container finished" podID="a7a0204b-b7e7-4e06-98f0-192f694d2b40" containerID="e0e98c58399e96ba47038db808459e83faf61de3ecbf21679c3e56befb13ed70" exitCode=0 Feb 16 14:20:41 crc kubenswrapper[4812]: I0216 14:20:41.242335 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tckp7" 
event={"ID":"a7a0204b-b7e7-4e06-98f0-192f694d2b40","Type":"ContainerDied","Data":"e0e98c58399e96ba47038db808459e83faf61de3ecbf21679c3e56befb13ed70"} Feb 16 14:20:42 crc kubenswrapper[4812]: I0216 14:20:42.259395 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tckp7" event={"ID":"a7a0204b-b7e7-4e06-98f0-192f694d2b40","Type":"ContainerStarted","Data":"3d738ddd4f63aded303d718a6bc325f8f5901f6b403ba10eca13b6e03e5faea1"} Feb 16 14:20:42 crc kubenswrapper[4812]: I0216 14:20:42.301196 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tckp7" podStartSLOduration=2.858602681 podStartE2EDuration="7.301171147s" podCreationTimestamp="2026-02-16 14:20:35 +0000 UTC" firstStartedPulling="2026-02-16 14:20:37.191583365 +0000 UTC m=+2926.255914066" lastFinishedPulling="2026-02-16 14:20:41.634151831 +0000 UTC m=+2930.698482532" observedRunningTime="2026-02-16 14:20:42.296996908 +0000 UTC m=+2931.361327609" watchObservedRunningTime="2026-02-16 14:20:42.301171147 +0000 UTC m=+2931.365501848" Feb 16 14:20:42 crc kubenswrapper[4812]: E0216 14:20:42.880919 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:20:44 crc kubenswrapper[4812]: I0216 14:20:44.549365 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:20:44 crc kubenswrapper[4812]: I0216 14:20:44.549903 4812 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:20:46 crc kubenswrapper[4812]: I0216 14:20:46.013913 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:46 crc kubenswrapper[4812]: I0216 14:20:46.014491 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:47 crc kubenswrapper[4812]: I0216 14:20:47.066125 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tckp7" podUID="a7a0204b-b7e7-4e06-98f0-192f694d2b40" containerName="registry-server" probeResult="failure" output=< Feb 16 14:20:47 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 16 14:20:47 crc kubenswrapper[4812]: > Feb 16 14:20:55 crc kubenswrapper[4812]: E0216 14:20:55.886104 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:20:56 crc kubenswrapper[4812]: I0216 14:20:56.063775 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:56 crc kubenswrapper[4812]: I0216 14:20:56.119418 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:56 crc kubenswrapper[4812]: I0216 14:20:56.310031 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-tckp7"] Feb 16 14:20:57 crc kubenswrapper[4812]: I0216 14:20:57.800883 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tckp7" podUID="a7a0204b-b7e7-4e06-98f0-192f694d2b40" containerName="registry-server" containerID="cri-o://3d738ddd4f63aded303d718a6bc325f8f5901f6b403ba10eca13b6e03e5faea1" gracePeriod=2 Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.631362 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.648558 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7a0204b-b7e7-4e06-98f0-192f694d2b40-utilities\") pod \"a7a0204b-b7e7-4e06-98f0-192f694d2b40\" (UID: \"a7a0204b-b7e7-4e06-98f0-192f694d2b40\") " Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.648781 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpb8l\" (UniqueName: \"kubernetes.io/projected/a7a0204b-b7e7-4e06-98f0-192f694d2b40-kube-api-access-kpb8l\") pod \"a7a0204b-b7e7-4e06-98f0-192f694d2b40\" (UID: \"a7a0204b-b7e7-4e06-98f0-192f694d2b40\") " Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.649174 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7a0204b-b7e7-4e06-98f0-192f694d2b40-catalog-content\") pod \"a7a0204b-b7e7-4e06-98f0-192f694d2b40\" (UID: \"a7a0204b-b7e7-4e06-98f0-192f694d2b40\") " Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.649790 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a0204b-b7e7-4e06-98f0-192f694d2b40-utilities" (OuterVolumeSpecName: "utilities") pod "a7a0204b-b7e7-4e06-98f0-192f694d2b40" (UID: 
"a7a0204b-b7e7-4e06-98f0-192f694d2b40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.650560 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7a0204b-b7e7-4e06-98f0-192f694d2b40-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.659802 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a0204b-b7e7-4e06-98f0-192f694d2b40-kube-api-access-kpb8l" (OuterVolumeSpecName: "kube-api-access-kpb8l") pod "a7a0204b-b7e7-4e06-98f0-192f694d2b40" (UID: "a7a0204b-b7e7-4e06-98f0-192f694d2b40"). InnerVolumeSpecName "kube-api-access-kpb8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.753155 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpb8l\" (UniqueName: \"kubernetes.io/projected/a7a0204b-b7e7-4e06-98f0-192f694d2b40-kube-api-access-kpb8l\") on node \"crc\" DevicePath \"\"" Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.809930 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a0204b-b7e7-4e06-98f0-192f694d2b40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7a0204b-b7e7-4e06-98f0-192f694d2b40" (UID: "a7a0204b-b7e7-4e06-98f0-192f694d2b40"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.821348 4812 generic.go:334] "Generic (PLEG): container finished" podID="a7a0204b-b7e7-4e06-98f0-192f694d2b40" containerID="3d738ddd4f63aded303d718a6bc325f8f5901f6b403ba10eca13b6e03e5faea1" exitCode=0 Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.821406 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tckp7" event={"ID":"a7a0204b-b7e7-4e06-98f0-192f694d2b40","Type":"ContainerDied","Data":"3d738ddd4f63aded303d718a6bc325f8f5901f6b403ba10eca13b6e03e5faea1"} Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.821489 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tckp7" event={"ID":"a7a0204b-b7e7-4e06-98f0-192f694d2b40","Type":"ContainerDied","Data":"65e62460ae18e92c6105e62577307ec5a70e903dc9399c4b1032f7cee7ba60bc"} Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.821484 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tckp7" Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.821574 4812 scope.go:117] "RemoveContainer" containerID="3d738ddd4f63aded303d718a6bc325f8f5901f6b403ba10eca13b6e03e5faea1" Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.855200 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7a0204b-b7e7-4e06-98f0-192f694d2b40-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.861684 4812 scope.go:117] "RemoveContainer" containerID="e0e98c58399e96ba47038db808459e83faf61de3ecbf21679c3e56befb13ed70" Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.873824 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tckp7"] Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.883159 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tckp7"] Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.897277 4812 scope.go:117] "RemoveContainer" containerID="50690f0e513a7e7f8515f73c5ff621c6d8c03972e541574979e83db4ac33f2e8" Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.945462 4812 scope.go:117] "RemoveContainer" containerID="3d738ddd4f63aded303d718a6bc325f8f5901f6b403ba10eca13b6e03e5faea1" Feb 16 14:20:58 crc kubenswrapper[4812]: E0216 14:20:58.945990 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d738ddd4f63aded303d718a6bc325f8f5901f6b403ba10eca13b6e03e5faea1\": container with ID starting with 3d738ddd4f63aded303d718a6bc325f8f5901f6b403ba10eca13b6e03e5faea1 not found: ID does not exist" containerID="3d738ddd4f63aded303d718a6bc325f8f5901f6b403ba10eca13b6e03e5faea1" Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.946028 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3d738ddd4f63aded303d718a6bc325f8f5901f6b403ba10eca13b6e03e5faea1"} err="failed to get container status \"3d738ddd4f63aded303d718a6bc325f8f5901f6b403ba10eca13b6e03e5faea1\": rpc error: code = NotFound desc = could not find container \"3d738ddd4f63aded303d718a6bc325f8f5901f6b403ba10eca13b6e03e5faea1\": container with ID starting with 3d738ddd4f63aded303d718a6bc325f8f5901f6b403ba10eca13b6e03e5faea1 not found: ID does not exist" Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.946059 4812 scope.go:117] "RemoveContainer" containerID="e0e98c58399e96ba47038db808459e83faf61de3ecbf21679c3e56befb13ed70" Feb 16 14:20:58 crc kubenswrapper[4812]: E0216 14:20:58.946490 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0e98c58399e96ba47038db808459e83faf61de3ecbf21679c3e56befb13ed70\": container with ID starting with e0e98c58399e96ba47038db808459e83faf61de3ecbf21679c3e56befb13ed70 not found: ID does not exist" containerID="e0e98c58399e96ba47038db808459e83faf61de3ecbf21679c3e56befb13ed70" Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.946523 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0e98c58399e96ba47038db808459e83faf61de3ecbf21679c3e56befb13ed70"} err="failed to get container status \"e0e98c58399e96ba47038db808459e83faf61de3ecbf21679c3e56befb13ed70\": rpc error: code = NotFound desc = could not find container \"e0e98c58399e96ba47038db808459e83faf61de3ecbf21679c3e56befb13ed70\": container with ID starting with e0e98c58399e96ba47038db808459e83faf61de3ecbf21679c3e56befb13ed70 not found: ID does not exist" Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.946542 4812 scope.go:117] "RemoveContainer" containerID="50690f0e513a7e7f8515f73c5ff621c6d8c03972e541574979e83db4ac33f2e8" Feb 16 14:20:58 crc kubenswrapper[4812]: E0216 14:20:58.946787 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"50690f0e513a7e7f8515f73c5ff621c6d8c03972e541574979e83db4ac33f2e8\": container with ID starting with 50690f0e513a7e7f8515f73c5ff621c6d8c03972e541574979e83db4ac33f2e8 not found: ID does not exist" containerID="50690f0e513a7e7f8515f73c5ff621c6d8c03972e541574979e83db4ac33f2e8" Feb 16 14:20:58 crc kubenswrapper[4812]: I0216 14:20:58.946816 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50690f0e513a7e7f8515f73c5ff621c6d8c03972e541574979e83db4ac33f2e8"} err="failed to get container status \"50690f0e513a7e7f8515f73c5ff621c6d8c03972e541574979e83db4ac33f2e8\": rpc error: code = NotFound desc = could not find container \"50690f0e513a7e7f8515f73c5ff621c6d8c03972e541574979e83db4ac33f2e8\": container with ID starting with 50690f0e513a7e7f8515f73c5ff621c6d8c03972e541574979e83db4ac33f2e8 not found: ID does not exist" Feb 16 14:20:59 crc kubenswrapper[4812]: I0216 14:20:59.905634 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a0204b-b7e7-4e06-98f0-192f694d2b40" path="/var/lib/kubelet/pods/a7a0204b-b7e7-4e06-98f0-192f694d2b40/volumes" Feb 16 14:21:09 crc kubenswrapper[4812]: E0216 14:21:09.882516 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:21:14 crc kubenswrapper[4812]: I0216 14:21:14.549306 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:21:14 crc kubenswrapper[4812]: I0216 
14:21:14.549863 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:21:14 crc kubenswrapper[4812]: I0216 14:21:14.549935 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 14:21:14 crc kubenswrapper[4812]: I0216 14:21:14.551041 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d"} pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 14:21:14 crc kubenswrapper[4812]: I0216 14:21:14.551104 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" containerID="cri-o://4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" gracePeriod=600 Feb 16 14:21:14 crc kubenswrapper[4812]: E0216 14:21:14.692880 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:21:15 crc kubenswrapper[4812]: I0216 14:21:15.148798 4812 generic.go:334] "Generic (PLEG): container finished" 
podID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" exitCode=0 Feb 16 14:21:15 crc kubenswrapper[4812]: I0216 14:21:15.148882 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerDied","Data":"4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d"} Feb 16 14:21:15 crc kubenswrapper[4812]: I0216 14:21:15.148939 4812 scope.go:117] "RemoveContainer" containerID="50c8740afef41fa58a15dc54138d11cc9c21f246b7407cadf90dca6a16b66a65" Feb 16 14:21:15 crc kubenswrapper[4812]: I0216 14:21:15.149949 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:21:15 crc kubenswrapper[4812]: E0216 14:21:15.150415 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:21:24 crc kubenswrapper[4812]: E0216 14:21:24.882731 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:21:28 crc kubenswrapper[4812]: I0216 14:21:28.879941 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:21:28 crc kubenswrapper[4812]: E0216 14:21:28.881125 4812 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:21:37 crc kubenswrapper[4812]: E0216 14:21:37.884484 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:21:43 crc kubenswrapper[4812]: I0216 14:21:43.880030 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:21:43 crc kubenswrapper[4812]: E0216 14:21:43.881631 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:21:51 crc kubenswrapper[4812]: E0216 14:21:51.897705 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:21:58 crc kubenswrapper[4812]: I0216 14:21:58.879246 4812 scope.go:117] "RemoveContainer" 
containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:21:58 crc kubenswrapper[4812]: E0216 14:21:58.880706 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:22:06 crc kubenswrapper[4812]: E0216 14:22:06.884408 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:22:12 crc kubenswrapper[4812]: I0216 14:22:12.879718 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:22:12 crc kubenswrapper[4812]: E0216 14:22:12.880291 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:22:17 crc kubenswrapper[4812]: E0216 14:22:17.885453 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:22:23 crc kubenswrapper[4812]: I0216 14:22:23.880390 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:22:23 crc kubenswrapper[4812]: E0216 14:22:23.881257 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:22:30 crc kubenswrapper[4812]: E0216 14:22:30.884276 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:22:37 crc kubenswrapper[4812]: I0216 14:22:37.879437 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:22:37 crc kubenswrapper[4812]: E0216 14:22:37.881163 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:22:41 crc kubenswrapper[4812]: E0216 14:22:41.905137 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:22:44 crc kubenswrapper[4812]: I0216 14:22:44.915755 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ndlqc/must-gather-drdhp"] Feb 16 14:22:44 crc kubenswrapper[4812]: E0216 14:22:44.916877 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a0204b-b7e7-4e06-98f0-192f694d2b40" containerName="registry-server" Feb 16 14:22:44 crc kubenswrapper[4812]: I0216 14:22:44.916891 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a0204b-b7e7-4e06-98f0-192f694d2b40" containerName="registry-server" Feb 16 14:22:44 crc kubenswrapper[4812]: E0216 14:22:44.916920 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a0204b-b7e7-4e06-98f0-192f694d2b40" containerName="extract-content" Feb 16 14:22:44 crc kubenswrapper[4812]: I0216 14:22:44.916926 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a0204b-b7e7-4e06-98f0-192f694d2b40" containerName="extract-content" Feb 16 14:22:44 crc kubenswrapper[4812]: E0216 14:22:44.916944 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a0204b-b7e7-4e06-98f0-192f694d2b40" containerName="extract-utilities" Feb 16 14:22:44 crc kubenswrapper[4812]: I0216 14:22:44.916952 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a0204b-b7e7-4e06-98f0-192f694d2b40" containerName="extract-utilities" Feb 16 14:22:44 crc kubenswrapper[4812]: I0216 14:22:44.917184 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a0204b-b7e7-4e06-98f0-192f694d2b40" containerName="registry-server" Feb 16 14:22:44 crc kubenswrapper[4812]: I0216 14:22:44.918467 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ndlqc/must-gather-drdhp" Feb 16 14:22:44 crc kubenswrapper[4812]: I0216 14:22:44.923315 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ndlqc"/"default-dockercfg-ttbtf" Feb 16 14:22:44 crc kubenswrapper[4812]: I0216 14:22:44.924752 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ndlqc"/"kube-root-ca.crt" Feb 16 14:22:44 crc kubenswrapper[4812]: I0216 14:22:44.933199 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ndlqc"/"openshift-service-ca.crt" Feb 16 14:22:44 crc kubenswrapper[4812]: I0216 14:22:44.961766 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ndlqc/must-gather-drdhp"] Feb 16 14:22:45 crc kubenswrapper[4812]: I0216 14:22:45.058849 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56f9c\" (UniqueName: \"kubernetes.io/projected/c8483932-ac35-4fb8-a807-b4d899788c4c-kube-api-access-56f9c\") pod \"must-gather-drdhp\" (UID: \"c8483932-ac35-4fb8-a807-b4d899788c4c\") " pod="openshift-must-gather-ndlqc/must-gather-drdhp" Feb 16 14:22:45 crc kubenswrapper[4812]: I0216 14:22:45.058978 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c8483932-ac35-4fb8-a807-b4d899788c4c-must-gather-output\") pod \"must-gather-drdhp\" (UID: \"c8483932-ac35-4fb8-a807-b4d899788c4c\") " pod="openshift-must-gather-ndlqc/must-gather-drdhp" Feb 16 14:22:45 crc kubenswrapper[4812]: I0216 14:22:45.160598 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56f9c\" (UniqueName: \"kubernetes.io/projected/c8483932-ac35-4fb8-a807-b4d899788c4c-kube-api-access-56f9c\") pod \"must-gather-drdhp\" (UID: \"c8483932-ac35-4fb8-a807-b4d899788c4c\") " 
pod="openshift-must-gather-ndlqc/must-gather-drdhp" Feb 16 14:22:45 crc kubenswrapper[4812]: I0216 14:22:45.160693 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c8483932-ac35-4fb8-a807-b4d899788c4c-must-gather-output\") pod \"must-gather-drdhp\" (UID: \"c8483932-ac35-4fb8-a807-b4d899788c4c\") " pod="openshift-must-gather-ndlqc/must-gather-drdhp" Feb 16 14:22:45 crc kubenswrapper[4812]: I0216 14:22:45.161296 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c8483932-ac35-4fb8-a807-b4d899788c4c-must-gather-output\") pod \"must-gather-drdhp\" (UID: \"c8483932-ac35-4fb8-a807-b4d899788c4c\") " pod="openshift-must-gather-ndlqc/must-gather-drdhp" Feb 16 14:22:45 crc kubenswrapper[4812]: I0216 14:22:45.225532 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56f9c\" (UniqueName: \"kubernetes.io/projected/c8483932-ac35-4fb8-a807-b4d899788c4c-kube-api-access-56f9c\") pod \"must-gather-drdhp\" (UID: \"c8483932-ac35-4fb8-a807-b4d899788c4c\") " pod="openshift-must-gather-ndlqc/must-gather-drdhp" Feb 16 14:22:45 crc kubenswrapper[4812]: I0216 14:22:45.251130 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ndlqc/must-gather-drdhp" Feb 16 14:22:45 crc kubenswrapper[4812]: I0216 14:22:45.906477 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ndlqc/must-gather-drdhp"] Feb 16 14:22:46 crc kubenswrapper[4812]: I0216 14:22:46.161799 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ndlqc/must-gather-drdhp" event={"ID":"c8483932-ac35-4fb8-a807-b4d899788c4c","Type":"ContainerStarted","Data":"58ba9336f4ef8effe9f1b54673dc3182ab60682a6cdc2680d4e3afe1e198e703"} Feb 16 14:22:50 crc kubenswrapper[4812]: I0216 14:22:50.879257 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:22:50 crc kubenswrapper[4812]: E0216 14:22:50.880574 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:22:53 crc kubenswrapper[4812]: I0216 14:22:53.238241 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ndlqc/must-gather-drdhp" event={"ID":"c8483932-ac35-4fb8-a807-b4d899788c4c","Type":"ContainerStarted","Data":"6ba1f54f8a3694595c2809208f07b8e593e95b58c43f72632689248e6e243852"} Feb 16 14:22:53 crc kubenswrapper[4812]: I0216 14:22:53.239107 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ndlqc/must-gather-drdhp" event={"ID":"c8483932-ac35-4fb8-a807-b4d899788c4c","Type":"ContainerStarted","Data":"61aca6a9e911fd0c038d861322ff41280b628f18d736ba00688b0b88c3fc7e12"} Feb 16 14:22:53 crc kubenswrapper[4812]: I0216 14:22:53.267661 4812 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-ndlqc/must-gather-drdhp" podStartSLOduration=2.97147089 podStartE2EDuration="9.267635275s" podCreationTimestamp="2026-02-16 14:22:44 +0000 UTC" firstStartedPulling="2026-02-16 14:22:45.897608864 +0000 UTC m=+3054.961939575" lastFinishedPulling="2026-02-16 14:22:52.193773259 +0000 UTC m=+3061.258103960" observedRunningTime="2026-02-16 14:22:53.260737097 +0000 UTC m=+3062.325067818" watchObservedRunningTime="2026-02-16 14:22:53.267635275 +0000 UTC m=+3062.331965966" Feb 16 14:22:54 crc kubenswrapper[4812]: E0216 14:22:54.882224 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:22:58 crc kubenswrapper[4812]: I0216 14:22:58.527034 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ndlqc/crc-debug-qwx77"] Feb 16 14:22:58 crc kubenswrapper[4812]: I0216 14:22:58.529378 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ndlqc/crc-debug-qwx77" Feb 16 14:22:58 crc kubenswrapper[4812]: I0216 14:22:58.676343 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdtmm\" (UniqueName: \"kubernetes.io/projected/057fc41d-80a8-4867-afe9-70b45c3d248f-kube-api-access-fdtmm\") pod \"crc-debug-qwx77\" (UID: \"057fc41d-80a8-4867-afe9-70b45c3d248f\") " pod="openshift-must-gather-ndlqc/crc-debug-qwx77" Feb 16 14:22:58 crc kubenswrapper[4812]: I0216 14:22:58.676821 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/057fc41d-80a8-4867-afe9-70b45c3d248f-host\") pod \"crc-debug-qwx77\" (UID: \"057fc41d-80a8-4867-afe9-70b45c3d248f\") " pod="openshift-must-gather-ndlqc/crc-debug-qwx77" Feb 16 14:22:58 crc kubenswrapper[4812]: I0216 14:22:58.779789 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdtmm\" (UniqueName: \"kubernetes.io/projected/057fc41d-80a8-4867-afe9-70b45c3d248f-kube-api-access-fdtmm\") pod \"crc-debug-qwx77\" (UID: \"057fc41d-80a8-4867-afe9-70b45c3d248f\") " pod="openshift-must-gather-ndlqc/crc-debug-qwx77" Feb 16 14:22:58 crc kubenswrapper[4812]: I0216 14:22:58.780003 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/057fc41d-80a8-4867-afe9-70b45c3d248f-host\") pod \"crc-debug-qwx77\" (UID: \"057fc41d-80a8-4867-afe9-70b45c3d248f\") " pod="openshift-must-gather-ndlqc/crc-debug-qwx77" Feb 16 14:22:58 crc kubenswrapper[4812]: I0216 14:22:58.780387 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/057fc41d-80a8-4867-afe9-70b45c3d248f-host\") pod \"crc-debug-qwx77\" (UID: \"057fc41d-80a8-4867-afe9-70b45c3d248f\") " pod="openshift-must-gather-ndlqc/crc-debug-qwx77" Feb 16 14:22:58 crc 
kubenswrapper[4812]: I0216 14:22:58.816685 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdtmm\" (UniqueName: \"kubernetes.io/projected/057fc41d-80a8-4867-afe9-70b45c3d248f-kube-api-access-fdtmm\") pod \"crc-debug-qwx77\" (UID: \"057fc41d-80a8-4867-afe9-70b45c3d248f\") " pod="openshift-must-gather-ndlqc/crc-debug-qwx77" Feb 16 14:22:58 crc kubenswrapper[4812]: I0216 14:22:58.858622 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ndlqc/crc-debug-qwx77" Feb 16 14:22:58 crc kubenswrapper[4812]: W0216 14:22:58.904316 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod057fc41d_80a8_4867_afe9_70b45c3d248f.slice/crio-57f467c6857b2dfda1a0c85e215f23bf506290e0f8e3838088e34a80370995f3 WatchSource:0}: Error finding container 57f467c6857b2dfda1a0c85e215f23bf506290e0f8e3838088e34a80370995f3: Status 404 returned error can't find the container with id 57f467c6857b2dfda1a0c85e215f23bf506290e0f8e3838088e34a80370995f3 Feb 16 14:22:59 crc kubenswrapper[4812]: I0216 14:22:59.309123 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ndlqc/crc-debug-qwx77" event={"ID":"057fc41d-80a8-4867-afe9-70b45c3d248f","Type":"ContainerStarted","Data":"57f467c6857b2dfda1a0c85e215f23bf506290e0f8e3838088e34a80370995f3"} Feb 16 14:23:04 crc kubenswrapper[4812]: I0216 14:23:04.879582 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:23:04 crc kubenswrapper[4812]: E0216 14:23:04.880688 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:23:06 crc kubenswrapper[4812]: E0216 14:23:06.883825 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:23:12 crc kubenswrapper[4812]: I0216 14:23:12.489825 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ndlqc/crc-debug-qwx77" event={"ID":"057fc41d-80a8-4867-afe9-70b45c3d248f","Type":"ContainerStarted","Data":"ca993a7cb018ee18ac125076025f0c269c115acc8937c359e748dbe7cac3cb2b"} Feb 16 14:23:12 crc kubenswrapper[4812]: I0216 14:23:12.515416 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ndlqc/crc-debug-qwx77" podStartSLOduration=1.5045973 podStartE2EDuration="14.515390607s" podCreationTimestamp="2026-02-16 14:22:58 +0000 UTC" firstStartedPulling="2026-02-16 14:22:58.907484084 +0000 UTC m=+3067.971814785" lastFinishedPulling="2026-02-16 14:23:11.918277391 +0000 UTC m=+3080.982608092" observedRunningTime="2026-02-16 14:23:12.513366209 +0000 UTC m=+3081.577696910" watchObservedRunningTime="2026-02-16 14:23:12.515390607 +0000 UTC m=+3081.579721308" Feb 16 14:23:18 crc kubenswrapper[4812]: I0216 14:23:18.879691 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:23:18 crc kubenswrapper[4812]: E0216 14:23:18.882272 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:23:21 crc kubenswrapper[4812]: E0216 14:23:21.892282 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:23:29 crc kubenswrapper[4812]: I0216 14:23:29.657609 4812 generic.go:334] "Generic (PLEG): container finished" podID="057fc41d-80a8-4867-afe9-70b45c3d248f" containerID="ca993a7cb018ee18ac125076025f0c269c115acc8937c359e748dbe7cac3cb2b" exitCode=0 Feb 16 14:23:29 crc kubenswrapper[4812]: I0216 14:23:29.657733 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ndlqc/crc-debug-qwx77" event={"ID":"057fc41d-80a8-4867-afe9-70b45c3d248f","Type":"ContainerDied","Data":"ca993a7cb018ee18ac125076025f0c269c115acc8937c359e748dbe7cac3cb2b"} Feb 16 14:23:30 crc kubenswrapper[4812]: I0216 14:23:30.806912 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ndlqc/crc-debug-qwx77" Feb 16 14:23:30 crc kubenswrapper[4812]: I0216 14:23:30.859664 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ndlqc/crc-debug-qwx77"] Feb 16 14:23:30 crc kubenswrapper[4812]: I0216 14:23:30.873751 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ndlqc/crc-debug-qwx77"] Feb 16 14:23:30 crc kubenswrapper[4812]: I0216 14:23:30.882518 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/057fc41d-80a8-4867-afe9-70b45c3d248f-host\") pod \"057fc41d-80a8-4867-afe9-70b45c3d248f\" (UID: \"057fc41d-80a8-4867-afe9-70b45c3d248f\") " Feb 16 14:23:30 crc kubenswrapper[4812]: I0216 14:23:30.882627 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/057fc41d-80a8-4867-afe9-70b45c3d248f-host" (OuterVolumeSpecName: "host") pod "057fc41d-80a8-4867-afe9-70b45c3d248f" (UID: "057fc41d-80a8-4867-afe9-70b45c3d248f"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:23:30 crc kubenswrapper[4812]: I0216 14:23:30.882834 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdtmm\" (UniqueName: \"kubernetes.io/projected/057fc41d-80a8-4867-afe9-70b45c3d248f-kube-api-access-fdtmm\") pod \"057fc41d-80a8-4867-afe9-70b45c3d248f\" (UID: \"057fc41d-80a8-4867-afe9-70b45c3d248f\") " Feb 16 14:23:30 crc kubenswrapper[4812]: I0216 14:23:30.883500 4812 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/057fc41d-80a8-4867-afe9-70b45c3d248f-host\") on node \"crc\" DevicePath \"\"" Feb 16 14:23:30 crc kubenswrapper[4812]: I0216 14:23:30.888822 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/057fc41d-80a8-4867-afe9-70b45c3d248f-kube-api-access-fdtmm" (OuterVolumeSpecName: "kube-api-access-fdtmm") pod "057fc41d-80a8-4867-afe9-70b45c3d248f" (UID: "057fc41d-80a8-4867-afe9-70b45c3d248f"). InnerVolumeSpecName "kube-api-access-fdtmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:23:30 crc kubenswrapper[4812]: I0216 14:23:30.987191 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdtmm\" (UniqueName: \"kubernetes.io/projected/057fc41d-80a8-4867-afe9-70b45c3d248f-kube-api-access-fdtmm\") on node \"crc\" DevicePath \"\"" Feb 16 14:23:32 crc kubenswrapper[4812]: I0216 14:23:32.172281 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ndlqc/crc-debug-qwx77" Feb 16 14:23:32 crc kubenswrapper[4812]: I0216 14:23:32.177817 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="057fc41d-80a8-4867-afe9-70b45c3d248f" path="/var/lib/kubelet/pods/057fc41d-80a8-4867-afe9-70b45c3d248f/volumes" Feb 16 14:23:32 crc kubenswrapper[4812]: I0216 14:23:32.180630 4812 scope.go:117] "RemoveContainer" containerID="ca993a7cb018ee18ac125076025f0c269c115acc8937c359e748dbe7cac3cb2b" Feb 16 14:23:32 crc kubenswrapper[4812]: I0216 14:23:32.192784 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ndlqc/crc-debug-rcf5s"] Feb 16 14:23:32 crc kubenswrapper[4812]: E0216 14:23:32.194010 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057fc41d-80a8-4867-afe9-70b45c3d248f" containerName="container-00" Feb 16 14:23:32 crc kubenswrapper[4812]: I0216 14:23:32.194050 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="057fc41d-80a8-4867-afe9-70b45c3d248f" containerName="container-00" Feb 16 14:23:32 crc kubenswrapper[4812]: I0216 14:23:32.194329 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="057fc41d-80a8-4867-afe9-70b45c3d248f" containerName="container-00" Feb 16 14:23:32 crc kubenswrapper[4812]: I0216 14:23:32.195542 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ndlqc/crc-debug-rcf5s" Feb 16 14:23:32 crc kubenswrapper[4812]: I0216 14:23:32.308329 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8e15074-a94b-4a91-8069-ea9f070e3a0d-host\") pod \"crc-debug-rcf5s\" (UID: \"c8e15074-a94b-4a91-8069-ea9f070e3a0d\") " pod="openshift-must-gather-ndlqc/crc-debug-rcf5s" Feb 16 14:23:32 crc kubenswrapper[4812]: I0216 14:23:32.308489 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp95c\" (UniqueName: \"kubernetes.io/projected/c8e15074-a94b-4a91-8069-ea9f070e3a0d-kube-api-access-tp95c\") pod \"crc-debug-rcf5s\" (UID: \"c8e15074-a94b-4a91-8069-ea9f070e3a0d\") " pod="openshift-must-gather-ndlqc/crc-debug-rcf5s" Feb 16 14:23:32 crc kubenswrapper[4812]: I0216 14:23:32.537427 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp95c\" (UniqueName: \"kubernetes.io/projected/c8e15074-a94b-4a91-8069-ea9f070e3a0d-kube-api-access-tp95c\") pod \"crc-debug-rcf5s\" (UID: \"c8e15074-a94b-4a91-8069-ea9f070e3a0d\") " pod="openshift-must-gather-ndlqc/crc-debug-rcf5s" Feb 16 14:23:32 crc kubenswrapper[4812]: I0216 14:23:32.539370 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8e15074-a94b-4a91-8069-ea9f070e3a0d-host\") pod \"crc-debug-rcf5s\" (UID: \"c8e15074-a94b-4a91-8069-ea9f070e3a0d\") " pod="openshift-must-gather-ndlqc/crc-debug-rcf5s" Feb 16 14:23:32 crc kubenswrapper[4812]: I0216 14:23:32.539581 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8e15074-a94b-4a91-8069-ea9f070e3a0d-host\") pod \"crc-debug-rcf5s\" (UID: \"c8e15074-a94b-4a91-8069-ea9f070e3a0d\") " pod="openshift-must-gather-ndlqc/crc-debug-rcf5s" Feb 16 14:23:32 crc 
kubenswrapper[4812]: I0216 14:23:32.577078 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp95c\" (UniqueName: \"kubernetes.io/projected/c8e15074-a94b-4a91-8069-ea9f070e3a0d-kube-api-access-tp95c\") pod \"crc-debug-rcf5s\" (UID: \"c8e15074-a94b-4a91-8069-ea9f070e3a0d\") " pod="openshift-must-gather-ndlqc/crc-debug-rcf5s" Feb 16 14:23:32 crc kubenswrapper[4812]: I0216 14:23:32.589372 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ndlqc/crc-debug-rcf5s" Feb 16 14:23:33 crc kubenswrapper[4812]: I0216 14:23:33.187594 4812 generic.go:334] "Generic (PLEG): container finished" podID="c8e15074-a94b-4a91-8069-ea9f070e3a0d" containerID="9ffb7ab73f5a362f32297203becfc97a38110a7337b1c0df2f0fb2bc7a2107fc" exitCode=1 Feb 16 14:23:33 crc kubenswrapper[4812]: I0216 14:23:33.187694 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ndlqc/crc-debug-rcf5s" event={"ID":"c8e15074-a94b-4a91-8069-ea9f070e3a0d","Type":"ContainerDied","Data":"9ffb7ab73f5a362f32297203becfc97a38110a7337b1c0df2f0fb2bc7a2107fc"} Feb 16 14:23:33 crc kubenswrapper[4812]: I0216 14:23:33.188161 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ndlqc/crc-debug-rcf5s" event={"ID":"c8e15074-a94b-4a91-8069-ea9f070e3a0d","Type":"ContainerStarted","Data":"982435a542522dcbe8e218b9b8c2e123fdff0b556eeca2101a66f4af051985ee"} Feb 16 14:23:33 crc kubenswrapper[4812]: I0216 14:23:33.236485 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ndlqc/crc-debug-rcf5s"] Feb 16 14:23:33 crc kubenswrapper[4812]: I0216 14:23:33.247543 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ndlqc/crc-debug-rcf5s"] Feb 16 14:23:33 crc kubenswrapper[4812]: I0216 14:23:33.879471 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:23:33 crc 
kubenswrapper[4812]: E0216 14:23:33.880199 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:23:34 crc kubenswrapper[4812]: I0216 14:23:34.326379 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ndlqc/crc-debug-rcf5s" Feb 16 14:23:34 crc kubenswrapper[4812]: I0216 14:23:34.507336 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp95c\" (UniqueName: \"kubernetes.io/projected/c8e15074-a94b-4a91-8069-ea9f070e3a0d-kube-api-access-tp95c\") pod \"c8e15074-a94b-4a91-8069-ea9f070e3a0d\" (UID: \"c8e15074-a94b-4a91-8069-ea9f070e3a0d\") " Feb 16 14:23:34 crc kubenswrapper[4812]: I0216 14:23:34.507766 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8e15074-a94b-4a91-8069-ea9f070e3a0d-host\") pod \"c8e15074-a94b-4a91-8069-ea9f070e3a0d\" (UID: \"c8e15074-a94b-4a91-8069-ea9f070e3a0d\") " Feb 16 14:23:34 crc kubenswrapper[4812]: I0216 14:23:34.508229 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8e15074-a94b-4a91-8069-ea9f070e3a0d-host" (OuterVolumeSpecName: "host") pod "c8e15074-a94b-4a91-8069-ea9f070e3a0d" (UID: "c8e15074-a94b-4a91-8069-ea9f070e3a0d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:23:34 crc kubenswrapper[4812]: I0216 14:23:34.509146 4812 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8e15074-a94b-4a91-8069-ea9f070e3a0d-host\") on node \"crc\" DevicePath \"\"" Feb 16 14:23:34 crc kubenswrapper[4812]: I0216 14:23:34.538522 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e15074-a94b-4a91-8069-ea9f070e3a0d-kube-api-access-tp95c" (OuterVolumeSpecName: "kube-api-access-tp95c") pod "c8e15074-a94b-4a91-8069-ea9f070e3a0d" (UID: "c8e15074-a94b-4a91-8069-ea9f070e3a0d"). InnerVolumeSpecName "kube-api-access-tp95c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:23:34 crc kubenswrapper[4812]: I0216 14:23:34.611337 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp95c\" (UniqueName: \"kubernetes.io/projected/c8e15074-a94b-4a91-8069-ea9f070e3a0d-kube-api-access-tp95c\") on node \"crc\" DevicePath \"\"" Feb 16 14:23:35 crc kubenswrapper[4812]: I0216 14:23:35.221390 4812 scope.go:117] "RemoveContainer" containerID="9ffb7ab73f5a362f32297203becfc97a38110a7337b1c0df2f0fb2bc7a2107fc" Feb 16 14:23:35 crc kubenswrapper[4812]: I0216 14:23:35.221615 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ndlqc/crc-debug-rcf5s" Feb 16 14:23:35 crc kubenswrapper[4812]: E0216 14:23:35.881025 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:23:35 crc kubenswrapper[4812]: I0216 14:23:35.891502 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8e15074-a94b-4a91-8069-ea9f070e3a0d" path="/var/lib/kubelet/pods/c8e15074-a94b-4a91-8069-ea9f070e3a0d/volumes" Feb 16 14:23:44 crc kubenswrapper[4812]: I0216 14:23:44.879486 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:23:44 crc kubenswrapper[4812]: E0216 14:23:44.883014 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:23:49 crc kubenswrapper[4812]: E0216 14:23:49.883171 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:23:57 crc kubenswrapper[4812]: I0216 14:23:57.880095 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:23:57 
crc kubenswrapper[4812]: E0216 14:23:57.881495 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:24:04 crc kubenswrapper[4812]: E0216 14:24:04.901056 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:24:09 crc kubenswrapper[4812]: I0216 14:24:09.896488 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:24:09 crc kubenswrapper[4812]: E0216 14:24:09.899811 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:24:13 crc kubenswrapper[4812]: I0216 14:24:13.053077 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2pdl9"] Feb 16 14:24:13 crc kubenswrapper[4812]: E0216 14:24:13.053954 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e15074-a94b-4a91-8069-ea9f070e3a0d" containerName="container-00" Feb 16 14:24:13 crc kubenswrapper[4812]: I0216 14:24:13.053972 4812 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e15074-a94b-4a91-8069-ea9f070e3a0d" containerName="container-00" Feb 16 14:24:13 crc kubenswrapper[4812]: I0216 14:24:13.054195 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e15074-a94b-4a91-8069-ea9f070e3a0d" containerName="container-00" Feb 16 14:24:13 crc kubenswrapper[4812]: I0216 14:24:13.055800 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:13 crc kubenswrapper[4812]: I0216 14:24:13.082960 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2pdl9"] Feb 16 14:24:13 crc kubenswrapper[4812]: I0216 14:24:13.170251 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvxlf\" (UniqueName: \"kubernetes.io/projected/69398252-468a-4e47-9035-ccdcd911654e-kube-api-access-pvxlf\") pod \"certified-operators-2pdl9\" (UID: \"69398252-468a-4e47-9035-ccdcd911654e\") " pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:13 crc kubenswrapper[4812]: I0216 14:24:13.170861 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69398252-468a-4e47-9035-ccdcd911654e-utilities\") pod \"certified-operators-2pdl9\" (UID: \"69398252-468a-4e47-9035-ccdcd911654e\") " pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:13 crc kubenswrapper[4812]: I0216 14:24:13.170976 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69398252-468a-4e47-9035-ccdcd911654e-catalog-content\") pod \"certified-operators-2pdl9\" (UID: \"69398252-468a-4e47-9035-ccdcd911654e\") " pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:13 crc kubenswrapper[4812]: I0216 14:24:13.272943 
4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69398252-468a-4e47-9035-ccdcd911654e-catalog-content\") pod \"certified-operators-2pdl9\" (UID: \"69398252-468a-4e47-9035-ccdcd911654e\") " pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:13 crc kubenswrapper[4812]: I0216 14:24:13.273501 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69398252-468a-4e47-9035-ccdcd911654e-catalog-content\") pod \"certified-operators-2pdl9\" (UID: \"69398252-468a-4e47-9035-ccdcd911654e\") " pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:13 crc kubenswrapper[4812]: I0216 14:24:13.273629 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvxlf\" (UniqueName: \"kubernetes.io/projected/69398252-468a-4e47-9035-ccdcd911654e-kube-api-access-pvxlf\") pod \"certified-operators-2pdl9\" (UID: \"69398252-468a-4e47-9035-ccdcd911654e\") " pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:13 crc kubenswrapper[4812]: I0216 14:24:13.274052 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69398252-468a-4e47-9035-ccdcd911654e-utilities\") pod \"certified-operators-2pdl9\" (UID: \"69398252-468a-4e47-9035-ccdcd911654e\") " pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:13 crc kubenswrapper[4812]: I0216 14:24:13.274307 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69398252-468a-4e47-9035-ccdcd911654e-utilities\") pod \"certified-operators-2pdl9\" (UID: \"69398252-468a-4e47-9035-ccdcd911654e\") " pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:13 crc kubenswrapper[4812]: I0216 14:24:13.297610 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pvxlf\" (UniqueName: \"kubernetes.io/projected/69398252-468a-4e47-9035-ccdcd911654e-kube-api-access-pvxlf\") pod \"certified-operators-2pdl9\" (UID: \"69398252-468a-4e47-9035-ccdcd911654e\") " pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:13 crc kubenswrapper[4812]: I0216 14:24:13.428882 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:13 crc kubenswrapper[4812]: I0216 14:24:13.953532 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2pdl9"] Feb 16 14:24:14 crc kubenswrapper[4812]: I0216 14:24:14.824702 4812 generic.go:334] "Generic (PLEG): container finished" podID="69398252-468a-4e47-9035-ccdcd911654e" containerID="f9488f087d1efdbdf456bec92fabadb85c6861cba91f43644aa7bb0701029f64" exitCode=0 Feb 16 14:24:14 crc kubenswrapper[4812]: I0216 14:24:14.824777 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2pdl9" event={"ID":"69398252-468a-4e47-9035-ccdcd911654e","Type":"ContainerDied","Data":"f9488f087d1efdbdf456bec92fabadb85c6861cba91f43644aa7bb0701029f64"} Feb 16 14:24:14 crc kubenswrapper[4812]: I0216 14:24:14.825404 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2pdl9" event={"ID":"69398252-468a-4e47-9035-ccdcd911654e","Type":"ContainerStarted","Data":"6f809bdd9740b7a605ec1e4920ca03a27ba858ccc7571225519ab22ad486e592"} Feb 16 14:24:15 crc kubenswrapper[4812]: I0216 14:24:15.837123 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2pdl9" event={"ID":"69398252-468a-4e47-9035-ccdcd911654e","Type":"ContainerStarted","Data":"1a649c7ae2dd1fd063e3289dd26d0c9dda96449b5639beb0a5dd10d8c5e2a039"} Feb 16 14:24:16 crc kubenswrapper[4812]: I0216 14:24:16.868652 4812 generic.go:334] "Generic (PLEG): 
container finished" podID="69398252-468a-4e47-9035-ccdcd911654e" containerID="1a649c7ae2dd1fd063e3289dd26d0c9dda96449b5639beb0a5dd10d8c5e2a039" exitCode=0 Feb 16 14:24:16 crc kubenswrapper[4812]: I0216 14:24:16.869022 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2pdl9" event={"ID":"69398252-468a-4e47-9035-ccdcd911654e","Type":"ContainerDied","Data":"1a649c7ae2dd1fd063e3289dd26d0c9dda96449b5639beb0a5dd10d8c5e2a039"} Feb 16 14:24:17 crc kubenswrapper[4812]: E0216 14:24:17.880674 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:24:17 crc kubenswrapper[4812]: I0216 14:24:17.893960 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2pdl9" event={"ID":"69398252-468a-4e47-9035-ccdcd911654e","Type":"ContainerStarted","Data":"48e8df181838dace7c5cc6565ad959cc4bc2377f42ea64bd0bdbbfbf5c207b8b"} Feb 16 14:24:17 crc kubenswrapper[4812]: I0216 14:24:17.902893 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2pdl9" podStartSLOduration=2.434549171 podStartE2EDuration="4.902868999s" podCreationTimestamp="2026-02-16 14:24:13 +0000 UTC" firstStartedPulling="2026-02-16 14:24:14.828757526 +0000 UTC m=+3143.893088227" lastFinishedPulling="2026-02-16 14:24:17.297077354 +0000 UTC m=+3146.361408055" observedRunningTime="2026-02-16 14:24:17.899739049 +0000 UTC m=+3146.964069750" watchObservedRunningTime="2026-02-16 14:24:17.902868999 +0000 UTC m=+3146.967199700" Feb 16 14:24:21 crc kubenswrapper[4812]: I0216 14:24:21.885945 4812 scope.go:117] "RemoveContainer" 
containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:24:21 crc kubenswrapper[4812]: E0216 14:24:21.886860 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:24:23 crc kubenswrapper[4812]: I0216 14:24:23.429532 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:23 crc kubenswrapper[4812]: I0216 14:24:23.430043 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:23 crc kubenswrapper[4812]: I0216 14:24:23.495970 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:24 crc kubenswrapper[4812]: I0216 14:24:24.011758 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:24 crc kubenswrapper[4812]: I0216 14:24:24.088192 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2pdl9"] Feb 16 14:24:25 crc kubenswrapper[4812]: I0216 14:24:25.957364 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2pdl9" podUID="69398252-468a-4e47-9035-ccdcd911654e" containerName="registry-server" containerID="cri-o://48e8df181838dace7c5cc6565ad959cc4bc2377f42ea64bd0bdbbfbf5c207b8b" gracePeriod=2 Feb 16 14:24:26 crc kubenswrapper[4812]: I0216 14:24:26.629112 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:26 crc kubenswrapper[4812]: I0216 14:24:26.692508 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69398252-468a-4e47-9035-ccdcd911654e-catalog-content\") pod \"69398252-468a-4e47-9035-ccdcd911654e\" (UID: \"69398252-468a-4e47-9035-ccdcd911654e\") " Feb 16 14:24:26 crc kubenswrapper[4812]: I0216 14:24:26.692810 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvxlf\" (UniqueName: \"kubernetes.io/projected/69398252-468a-4e47-9035-ccdcd911654e-kube-api-access-pvxlf\") pod \"69398252-468a-4e47-9035-ccdcd911654e\" (UID: \"69398252-468a-4e47-9035-ccdcd911654e\") " Feb 16 14:24:26 crc kubenswrapper[4812]: I0216 14:24:26.692852 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69398252-468a-4e47-9035-ccdcd911654e-utilities\") pod \"69398252-468a-4e47-9035-ccdcd911654e\" (UID: \"69398252-468a-4e47-9035-ccdcd911654e\") " Feb 16 14:24:26 crc kubenswrapper[4812]: I0216 14:24:26.693899 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69398252-468a-4e47-9035-ccdcd911654e-utilities" (OuterVolumeSpecName: "utilities") pod "69398252-468a-4e47-9035-ccdcd911654e" (UID: "69398252-468a-4e47-9035-ccdcd911654e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:24:26 crc kubenswrapper[4812]: I0216 14:24:26.703610 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69398252-468a-4e47-9035-ccdcd911654e-kube-api-access-pvxlf" (OuterVolumeSpecName: "kube-api-access-pvxlf") pod "69398252-468a-4e47-9035-ccdcd911654e" (UID: "69398252-468a-4e47-9035-ccdcd911654e"). InnerVolumeSpecName "kube-api-access-pvxlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:24:26 crc kubenswrapper[4812]: I0216 14:24:26.740099 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69398252-468a-4e47-9035-ccdcd911654e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69398252-468a-4e47-9035-ccdcd911654e" (UID: "69398252-468a-4e47-9035-ccdcd911654e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:24:26 crc kubenswrapper[4812]: I0216 14:24:26.795695 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69398252-468a-4e47-9035-ccdcd911654e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:24:26 crc kubenswrapper[4812]: I0216 14:24:26.795754 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvxlf\" (UniqueName: \"kubernetes.io/projected/69398252-468a-4e47-9035-ccdcd911654e-kube-api-access-pvxlf\") on node \"crc\" DevicePath \"\"" Feb 16 14:24:26 crc kubenswrapper[4812]: I0216 14:24:26.795776 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69398252-468a-4e47-9035-ccdcd911654e-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:24:26 crc kubenswrapper[4812]: I0216 14:24:26.968814 4812 generic.go:334] "Generic (PLEG): container finished" podID="69398252-468a-4e47-9035-ccdcd911654e" containerID="48e8df181838dace7c5cc6565ad959cc4bc2377f42ea64bd0bdbbfbf5c207b8b" exitCode=0 Feb 16 14:24:26 crc kubenswrapper[4812]: I0216 14:24:26.968866 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2pdl9" event={"ID":"69398252-468a-4e47-9035-ccdcd911654e","Type":"ContainerDied","Data":"48e8df181838dace7c5cc6565ad959cc4bc2377f42ea64bd0bdbbfbf5c207b8b"} Feb 16 14:24:26 crc kubenswrapper[4812]: I0216 14:24:26.968916 4812 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-2pdl9" event={"ID":"69398252-468a-4e47-9035-ccdcd911654e","Type":"ContainerDied","Data":"6f809bdd9740b7a605ec1e4920ca03a27ba858ccc7571225519ab22ad486e592"} Feb 16 14:24:26 crc kubenswrapper[4812]: I0216 14:24:26.968943 4812 scope.go:117] "RemoveContainer" containerID="48e8df181838dace7c5cc6565ad959cc4bc2377f42ea64bd0bdbbfbf5c207b8b" Feb 16 14:24:26 crc kubenswrapper[4812]: I0216 14:24:26.968983 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2pdl9" Feb 16 14:24:27 crc kubenswrapper[4812]: I0216 14:24:27.004493 4812 scope.go:117] "RemoveContainer" containerID="1a649c7ae2dd1fd063e3289dd26d0c9dda96449b5639beb0a5dd10d8c5e2a039" Feb 16 14:24:27 crc kubenswrapper[4812]: I0216 14:24:27.025007 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2pdl9"] Feb 16 14:24:27 crc kubenswrapper[4812]: I0216 14:24:27.029212 4812 scope.go:117] "RemoveContainer" containerID="f9488f087d1efdbdf456bec92fabadb85c6861cba91f43644aa7bb0701029f64" Feb 16 14:24:27 crc kubenswrapper[4812]: I0216 14:24:27.037276 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2pdl9"] Feb 16 14:24:27 crc kubenswrapper[4812]: I0216 14:24:27.077079 4812 scope.go:117] "RemoveContainer" containerID="48e8df181838dace7c5cc6565ad959cc4bc2377f42ea64bd0bdbbfbf5c207b8b" Feb 16 14:24:27 crc kubenswrapper[4812]: E0216 14:24:27.077826 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48e8df181838dace7c5cc6565ad959cc4bc2377f42ea64bd0bdbbfbf5c207b8b\": container with ID starting with 48e8df181838dace7c5cc6565ad959cc4bc2377f42ea64bd0bdbbfbf5c207b8b not found: ID does not exist" containerID="48e8df181838dace7c5cc6565ad959cc4bc2377f42ea64bd0bdbbfbf5c207b8b" Feb 16 14:24:27 crc kubenswrapper[4812]: I0216 
14:24:27.077872 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48e8df181838dace7c5cc6565ad959cc4bc2377f42ea64bd0bdbbfbf5c207b8b"} err="failed to get container status \"48e8df181838dace7c5cc6565ad959cc4bc2377f42ea64bd0bdbbfbf5c207b8b\": rpc error: code = NotFound desc = could not find container \"48e8df181838dace7c5cc6565ad959cc4bc2377f42ea64bd0bdbbfbf5c207b8b\": container with ID starting with 48e8df181838dace7c5cc6565ad959cc4bc2377f42ea64bd0bdbbfbf5c207b8b not found: ID does not exist" Feb 16 14:24:27 crc kubenswrapper[4812]: I0216 14:24:27.077896 4812 scope.go:117] "RemoveContainer" containerID="1a649c7ae2dd1fd063e3289dd26d0c9dda96449b5639beb0a5dd10d8c5e2a039" Feb 16 14:24:27 crc kubenswrapper[4812]: E0216 14:24:27.078493 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a649c7ae2dd1fd063e3289dd26d0c9dda96449b5639beb0a5dd10d8c5e2a039\": container with ID starting with 1a649c7ae2dd1fd063e3289dd26d0c9dda96449b5639beb0a5dd10d8c5e2a039 not found: ID does not exist" containerID="1a649c7ae2dd1fd063e3289dd26d0c9dda96449b5639beb0a5dd10d8c5e2a039" Feb 16 14:24:27 crc kubenswrapper[4812]: I0216 14:24:27.078545 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a649c7ae2dd1fd063e3289dd26d0c9dda96449b5639beb0a5dd10d8c5e2a039"} err="failed to get container status \"1a649c7ae2dd1fd063e3289dd26d0c9dda96449b5639beb0a5dd10d8c5e2a039\": rpc error: code = NotFound desc = could not find container \"1a649c7ae2dd1fd063e3289dd26d0c9dda96449b5639beb0a5dd10d8c5e2a039\": container with ID starting with 1a649c7ae2dd1fd063e3289dd26d0c9dda96449b5639beb0a5dd10d8c5e2a039 not found: ID does not exist" Feb 16 14:24:27 crc kubenswrapper[4812]: I0216 14:24:27.078564 4812 scope.go:117] "RemoveContainer" containerID="f9488f087d1efdbdf456bec92fabadb85c6861cba91f43644aa7bb0701029f64" Feb 16 14:24:27 crc 
kubenswrapper[4812]: E0216 14:24:27.078986 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9488f087d1efdbdf456bec92fabadb85c6861cba91f43644aa7bb0701029f64\": container with ID starting with f9488f087d1efdbdf456bec92fabadb85c6861cba91f43644aa7bb0701029f64 not found: ID does not exist" containerID="f9488f087d1efdbdf456bec92fabadb85c6861cba91f43644aa7bb0701029f64" Feb 16 14:24:27 crc kubenswrapper[4812]: I0216 14:24:27.079046 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9488f087d1efdbdf456bec92fabadb85c6861cba91f43644aa7bb0701029f64"} err="failed to get container status \"f9488f087d1efdbdf456bec92fabadb85c6861cba91f43644aa7bb0701029f64\": rpc error: code = NotFound desc = could not find container \"f9488f087d1efdbdf456bec92fabadb85c6861cba91f43644aa7bb0701029f64\": container with ID starting with f9488f087d1efdbdf456bec92fabadb85c6861cba91f43644aa7bb0701029f64 not found: ID does not exist" Feb 16 14:24:27 crc kubenswrapper[4812]: I0216 14:24:27.896480 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69398252-468a-4e47-9035-ccdcd911654e" path="/var/lib/kubelet/pods/69398252-468a-4e47-9035-ccdcd911654e/volumes" Feb 16 14:24:28 crc kubenswrapper[4812]: E0216 14:24:28.885176 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:24:35 crc kubenswrapper[4812]: I0216 14:24:35.412831 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_96cb02af-deed-4da5-96cf-28d69592caed/init-config-reloader/0.log" Feb 16 14:24:35 crc kubenswrapper[4812]: I0216 14:24:35.638966 4812 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_96cb02af-deed-4da5-96cf-28d69592caed/alertmanager/0.log" Feb 16 14:24:35 crc kubenswrapper[4812]: I0216 14:24:35.645354 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_96cb02af-deed-4da5-96cf-28d69592caed/init-config-reloader/0.log" Feb 16 14:24:35 crc kubenswrapper[4812]: I0216 14:24:35.720221 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_96cb02af-deed-4da5-96cf-28d69592caed/config-reloader/0.log" Feb 16 14:24:35 crc kubenswrapper[4812]: I0216 14:24:35.821504 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-dd87694f4-8qsk9_c51849be-b016-41a0-9959-654f56fd10c2/barbican-api/0.log" Feb 16 14:24:35 crc kubenswrapper[4812]: I0216 14:24:35.978770 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-dd87694f4-8qsk9_c51849be-b016-41a0-9959-654f56fd10c2/barbican-api-log/0.log" Feb 16 14:24:36 crc kubenswrapper[4812]: I0216 14:24:36.011973 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-57b9fd55d-zs44x_b743ee5f-7d4b-4e37-b46f-449f1c1155f9/barbican-keystone-listener/0.log" Feb 16 14:24:36 crc kubenswrapper[4812]: I0216 14:24:36.079431 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-57b9fd55d-zs44x_b743ee5f-7d4b-4e37-b46f-449f1c1155f9/barbican-keystone-listener-log/0.log" Feb 16 14:24:36 crc kubenswrapper[4812]: I0216 14:24:36.233347 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-59c64f6659-7rr8v_1e7c7a64-8967-4ee4-af38-c6d384fbd722/barbican-worker/0.log" Feb 16 14:24:36 crc kubenswrapper[4812]: I0216 14:24:36.303550 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-59c64f6659-7rr8v_1e7c7a64-8967-4ee4-af38-c6d384fbd722/barbican-worker-log/0.log" Feb 16 14:24:36 crc kubenswrapper[4812]: I0216 14:24:36.879888 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:24:36 crc kubenswrapper[4812]: E0216 14:24:36.880702 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:24:36 crc kubenswrapper[4812]: I0216 14:24:36.970787 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dae1afc9-20e3-4925-bcbf-cda49f1f4011/ceilometer-central-agent/0.log" Feb 16 14:24:36 crc kubenswrapper[4812]: I0216 14:24:36.974074 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dae1afc9-20e3-4925-bcbf-cda49f1f4011/ceilometer-notification-agent/0.log" Feb 16 14:24:37 crc kubenswrapper[4812]: I0216 14:24:37.059217 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dae1afc9-20e3-4925-bcbf-cda49f1f4011/proxy-httpd/0.log" Feb 16 14:24:37 crc kubenswrapper[4812]: I0216 14:24:37.099287 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dae1afc9-20e3-4925-bcbf-cda49f1f4011/sg-core/0.log" Feb 16 14:24:37 crc kubenswrapper[4812]: I0216 14:24:37.231611 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_73d33b57-0c02-4e05-b1a2-0d3075385bd4/cinder-api/0.log" Feb 16 14:24:37 crc kubenswrapper[4812]: I0216 14:24:37.336603 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_73d33b57-0c02-4e05-b1a2-0d3075385bd4/cinder-api-log/0.log" Feb 16 14:24:37 crc kubenswrapper[4812]: I0216 14:24:37.482147 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a822cac0-26cb-430a-8c4f-78d11b7451dd/cinder-scheduler/0.log" Feb 16 14:24:37 crc kubenswrapper[4812]: I0216 14:24:37.502551 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a822cac0-26cb-430a-8c4f-78d11b7451dd/probe/0.log" Feb 16 14:24:37 crc kubenswrapper[4812]: I0216 14:24:37.731426 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_0a320041-5efb-4a26-b9e4-cdf85da40717/loki-compactor/0.log" Feb 16 14:24:37 crc kubenswrapper[4812]: I0216 14:24:37.881190 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-585d9bcbc-6xb2f_a89a28de-f5cd-413a-bbbc-0f58c3f5fd1f/loki-distributor/0.log" Feb 16 14:24:38 crc kubenswrapper[4812]: I0216 14:24:38.005360 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-hb5vr_6d8ae81a-a9ec-4f2f-8369-0164c6c1923c/gateway/0.log" Feb 16 14:24:38 crc kubenswrapper[4812]: I0216 14:24:38.105689 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-j48px_cef2c2bd-5dea-4bf2-8fcf-a3cadc541023/gateway/0.log" Feb 16 14:24:38 crc kubenswrapper[4812]: I0216 14:24:38.312655 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_33486bd3-170e-428a-ab58-dd7bd52e6a53/loki-index-gateway/0.log" Feb 16 14:24:38 crc kubenswrapper[4812]: I0216 14:24:38.430217 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_51f12264-af08-4cf2-9e76-98dc91b0b7a8/loki-ingester/0.log" Feb 16 14:24:38 crc kubenswrapper[4812]: I0216 14:24:38.636905 
4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-58c84b5844-p88ww_826ded0a-246d-40b7-87d1-22fa8224d506/loki-querier/0.log" Feb 16 14:24:38 crc kubenswrapper[4812]: I0216 14:24:38.688077 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-67bb4dfcd8-2l86h_d909c793-0634-48f0-8f71-4f21dc9979af/loki-query-frontend/0.log" Feb 16 14:24:38 crc kubenswrapper[4812]: I0216 14:24:38.964106 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-6sbj2_47232a67-6356-4806-83a7-74719fb464fc/init/0.log" Feb 16 14:24:39 crc kubenswrapper[4812]: I0216 14:24:39.177333 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-6sbj2_47232a67-6356-4806-83a7-74719fb464fc/init/0.log" Feb 16 14:24:39 crc kubenswrapper[4812]: I0216 14:24:39.215713 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-6sbj2_47232a67-6356-4806-83a7-74719fb464fc/dnsmasq-dns/0.log" Feb 16 14:24:39 crc kubenswrapper[4812]: I0216 14:24:39.269432 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_97b7cfdc-998e-4667-be36-ab781bf0fb41/glance-httpd/0.log" Feb 16 14:24:39 crc kubenswrapper[4812]: I0216 14:24:39.393109 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_97b7cfdc-998e-4667-be36-ab781bf0fb41/glance-log/0.log" Feb 16 14:24:39 crc kubenswrapper[4812]: I0216 14:24:39.480306 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_906d6897-4bab-46a7-ade3-c5c02bf43c0f/glance-httpd/0.log" Feb 16 14:24:39 crc kubenswrapper[4812]: I0216 14:24:39.532192 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_906d6897-4bab-46a7-ade3-c5c02bf43c0f/glance-log/0.log" Feb 16 14:24:39 crc 
kubenswrapper[4812]: E0216 14:24:39.918288 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:24:40 crc kubenswrapper[4812]: I0216 14:24:40.210201 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29520841-jq24z_32a3c3bd-297d-49b8-a083-19f25cacf8c2/keystone-cron/0.log" Feb 16 14:24:40 crc kubenswrapper[4812]: I0216 14:24:40.329945 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-f7dcf4bcb-h6jf8_de3c3908-5942-4fd3-ac7b-6ca838a36198/keystone-api/0.log" Feb 16 14:24:40 crc kubenswrapper[4812]: I0216 14:24:40.468353 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f508573d-dccc-4922-9173-48c8c9a8e134/kube-state-metrics/0.log" Feb 16 14:24:40 crc kubenswrapper[4812]: I0216 14:24:40.641706 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5687b6b775-mt8dp_9c203d1a-c01d-4dda-889c-4a09ea0c616c/neutron-api/0.log" Feb 16 14:24:40 crc kubenswrapper[4812]: I0216 14:24:40.747368 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5687b6b775-mt8dp_9c203d1a-c01d-4dda-889c-4a09ea0c616c/neutron-httpd/0.log" Feb 16 14:24:41 crc kubenswrapper[4812]: I0216 14:24:41.144330 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ce0b3ece-701b-4853-ace9-e21f7a68fc31/nova-api-log/0.log" Feb 16 14:24:41 crc kubenswrapper[4812]: I0216 14:24:41.146554 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f64dd2c6-0b02-4ed2-afd2-04d93a9c7d68/nova-cell0-conductor-conductor/0.log" Feb 16 14:24:41 crc kubenswrapper[4812]: I0216 14:24:41.236242 4812 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ce0b3ece-701b-4853-ace9-e21f7a68fc31/nova-api-api/0.log" Feb 16 14:24:41 crc kubenswrapper[4812]: I0216 14:24:41.511357 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_31ddd42b-256a-4ab3-a348-bfa32b61cd2e/nova-cell1-conductor-conductor/0.log" Feb 16 14:24:41 crc kubenswrapper[4812]: I0216 14:24:41.525915 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_e27fec58-8fdf-4df4-890a-ebec94ae3904/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 14:24:41 crc kubenswrapper[4812]: I0216 14:24:41.878718 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_bb74f45a-d06e-4770-a282-ea0c7305ef2c/nova-metadata-log/0.log" Feb 16 14:24:41 crc kubenswrapper[4812]: I0216 14:24:41.908038 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_c03ccdce-b222-4ef5-be48-9d0ab6465290/nova-scheduler-scheduler/0.log" Feb 16 14:24:42 crc kubenswrapper[4812]: I0216 14:24:42.279676 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7/mysql-bootstrap/0.log" Feb 16 14:24:42 crc kubenswrapper[4812]: I0216 14:24:42.563121 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7/galera/0.log" Feb 16 14:24:42 crc kubenswrapper[4812]: I0216 14:24:42.594191 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d88fd8b8-bc85-4bd8-8a16-3dad3cb623a7/mysql-bootstrap/0.log" Feb 16 14:24:42 crc kubenswrapper[4812]: I0216 14:24:42.821546 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_bb74f45a-d06e-4770-a282-ea0c7305ef2c/nova-metadata-metadata/0.log" Feb 16 14:24:42 crc kubenswrapper[4812]: I0216 14:24:42.837273 4812 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_11179909-1e24-429d-9d33-e2c448e1cf6b/mysql-bootstrap/0.log" Feb 16 14:24:43 crc kubenswrapper[4812]: I0216 14:24:43.044275 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_11179909-1e24-429d-9d33-e2c448e1cf6b/mysql-bootstrap/0.log" Feb 16 14:24:43 crc kubenswrapper[4812]: I0216 14:24:43.107670 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_528da5b1-5cfd-42dd-bfaf-ad82eb579d97/openstackclient/0.log" Feb 16 14:24:43 crc kubenswrapper[4812]: I0216 14:24:43.175495 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_11179909-1e24-429d-9d33-e2c448e1cf6b/galera/0.log" Feb 16 14:24:43 crc kubenswrapper[4812]: I0216 14:24:43.471229 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7dzhm_2ebd3c08-88e8-4b5d-9ce9-9386b2c4db70/ovn-controller/0.log" Feb 16 14:24:43 crc kubenswrapper[4812]: I0216 14:24:43.608078 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-tcrnd_56d45d6a-4e06-471e-bdc8-60d60af85545/openstack-network-exporter/0.log" Feb 16 14:24:43 crc kubenswrapper[4812]: I0216 14:24:43.853931 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hjxr5_619a5cb7-30a8-4ac4-955e-d2c97ce49fda/ovsdb-server-init/0.log" Feb 16 14:24:44 crc kubenswrapper[4812]: I0216 14:24:44.269390 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hjxr5_619a5cb7-30a8-4ac4-955e-d2c97ce49fda/ovsdb-server/0.log" Feb 16 14:24:44 crc kubenswrapper[4812]: I0216 14:24:44.295672 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hjxr5_619a5cb7-30a8-4ac4-955e-d2c97ce49fda/ovsdb-server-init/0.log" Feb 16 14:24:44 crc kubenswrapper[4812]: I0216 14:24:44.318206 4812 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hjxr5_619a5cb7-30a8-4ac4-955e-d2c97ce49fda/ovs-vswitchd/0.log" Feb 16 14:24:44 crc kubenswrapper[4812]: I0216 14:24:44.477673 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c/openstack-network-exporter/0.log" Feb 16 14:24:44 crc kubenswrapper[4812]: I0216 14:24:44.608313 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8b3ef2f4-5a54-4fbd-9ecf-de1f0174095c/ovn-northd/0.log" Feb 16 14:24:44 crc kubenswrapper[4812]: I0216 14:24:44.651623 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516/openstack-network-exporter/0.log" Feb 16 14:24:44 crc kubenswrapper[4812]: I0216 14:24:44.736051 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2f5ace4f-0f44-4ea3-9f71-0f4bd7a6f516/ovsdbserver-nb/0.log" Feb 16 14:24:44 crc kubenswrapper[4812]: I0216 14:24:44.811460 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7eae7df6-e3b7-4ac5-bb18-6b781744747d/openstack-network-exporter/0.log" Feb 16 14:24:45 crc kubenswrapper[4812]: I0216 14:24:45.025950 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7eae7df6-e3b7-4ac5-bb18-6b781744747d/ovsdbserver-sb/0.log" Feb 16 14:24:45 crc kubenswrapper[4812]: I0216 14:24:45.150665 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f575bfd48-dqv2k_fa2da193-05ce-4fae-968e-5f9a7e2efd2c/placement-log/0.log" Feb 16 14:24:45 crc kubenswrapper[4812]: I0216 14:24:45.244576 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f575bfd48-dqv2k_fa2da193-05ce-4fae-968e-5f9a7e2efd2c/placement-api/0.log" Feb 16 14:24:45 crc kubenswrapper[4812]: I0216 14:24:45.328702 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_e3116255-f9dd-4ce3-bf47-779d963bbb98/init-config-reloader/0.log" Feb 16 14:24:45 crc kubenswrapper[4812]: I0216 14:24:45.601362 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e3116255-f9dd-4ce3-bf47-779d963bbb98/thanos-sidecar/0.log" Feb 16 14:24:45 crc kubenswrapper[4812]: I0216 14:24:45.633031 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e3116255-f9dd-4ce3-bf47-779d963bbb98/config-reloader/0.log" Feb 16 14:24:45 crc kubenswrapper[4812]: I0216 14:24:45.650432 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e3116255-f9dd-4ce3-bf47-779d963bbb98/prometheus/0.log" Feb 16 14:24:45 crc kubenswrapper[4812]: I0216 14:24:45.677272 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e3116255-f9dd-4ce3-bf47-779d963bbb98/init-config-reloader/0.log" Feb 16 14:24:45 crc kubenswrapper[4812]: I0216 14:24:45.872489 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f00dce1e-5743-4129-b78b-4a29351da7ed/setup-container/0.log" Feb 16 14:24:46 crc kubenswrapper[4812]: I0216 14:24:46.071236 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f00dce1e-5743-4129-b78b-4a29351da7ed/setup-container/0.log" Feb 16 14:24:46 crc kubenswrapper[4812]: I0216 14:24:46.202467 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f00dce1e-5743-4129-b78b-4a29351da7ed/rabbitmq/0.log" Feb 16 14:24:46 crc kubenswrapper[4812]: I0216 14:24:46.242363 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1/setup-container/0.log" Feb 16 14:24:46 crc kubenswrapper[4812]: I0216 14:24:46.719210 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1/setup-container/0.log" Feb 16 14:24:46 crc kubenswrapper[4812]: I0216 14:24:46.788916 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aa9e7fbb-f7a7-4a2f-91cc-77a4d1cd24f1/rabbitmq/0.log" Feb 16 14:24:46 crc kubenswrapper[4812]: I0216 14:24:46.838512 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d67c77f6c-gcgq7_49cfe7b6-0403-4fae-8c40-9fdec91bceee/proxy-httpd/0.log" Feb 16 14:24:46 crc kubenswrapper[4812]: I0216 14:24:46.915259 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d67c77f6c-gcgq7_49cfe7b6-0403-4fae-8c40-9fdec91bceee/proxy-server/0.log" Feb 16 14:24:46 crc kubenswrapper[4812]: I0216 14:24:46.991969 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dkwvj_3e7d63b8-7d3a-4169-b939-2ea11895b53a/swift-ring-rebalance/0.log" Feb 16 14:24:47 crc kubenswrapper[4812]: I0216 14:24:47.226468 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7f34d582-3b55-4d2a-91b3-c64acd57981f/account-auditor/0.log" Feb 16 14:24:47 crc kubenswrapper[4812]: I0216 14:24:47.244271 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7f34d582-3b55-4d2a-91b3-c64acd57981f/account-reaper/0.log" Feb 16 14:24:47 crc kubenswrapper[4812]: I0216 14:24:47.346899 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7f34d582-3b55-4d2a-91b3-c64acd57981f/account-replicator/0.log" Feb 16 14:24:47 crc kubenswrapper[4812]: I0216 14:24:47.513544 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7f34d582-3b55-4d2a-91b3-c64acd57981f/container-auditor/0.log" Feb 16 14:24:47 crc kubenswrapper[4812]: I0216 14:24:47.523979 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_7f34d582-3b55-4d2a-91b3-c64acd57981f/account-server/0.log" Feb 16 14:24:47 crc kubenswrapper[4812]: I0216 14:24:47.540261 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7f34d582-3b55-4d2a-91b3-c64acd57981f/container-replicator/0.log" Feb 16 14:24:47 crc kubenswrapper[4812]: I0216 14:24:47.644409 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7f34d582-3b55-4d2a-91b3-c64acd57981f/container-server/0.log" Feb 16 14:24:47 crc kubenswrapper[4812]: I0216 14:24:47.769574 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7f34d582-3b55-4d2a-91b3-c64acd57981f/object-auditor/0.log" Feb 16 14:24:47 crc kubenswrapper[4812]: I0216 14:24:47.791130 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7f34d582-3b55-4d2a-91b3-c64acd57981f/container-updater/0.log" Feb 16 14:24:47 crc kubenswrapper[4812]: I0216 14:24:47.799339 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7f34d582-3b55-4d2a-91b3-c64acd57981f/object-expirer/0.log" Feb 16 14:24:47 crc kubenswrapper[4812]: I0216 14:24:47.927182 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7f34d582-3b55-4d2a-91b3-c64acd57981f/object-replicator/0.log" Feb 16 14:24:48 crc kubenswrapper[4812]: I0216 14:24:48.021400 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7f34d582-3b55-4d2a-91b3-c64acd57981f/object-server/0.log" Feb 16 14:24:48 crc kubenswrapper[4812]: I0216 14:24:48.034810 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7f34d582-3b55-4d2a-91b3-c64acd57981f/rsync/0.log" Feb 16 14:24:48 crc kubenswrapper[4812]: I0216 14:24:48.062014 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_7f34d582-3b55-4d2a-91b3-c64acd57981f/object-updater/0.log" Feb 16 14:24:48 crc kubenswrapper[4812]: I0216 14:24:48.230126 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7f34d582-3b55-4d2a-91b3-c64acd57981f/swift-recon-cron/0.log" Feb 16 14:24:50 crc kubenswrapper[4812]: I0216 14:24:50.748489 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_95382144-b401-41b0-bf26-8a5503df91f6/memcached/0.log" Feb 16 14:24:51 crc kubenswrapper[4812]: I0216 14:24:51.897928 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:24:51 crc kubenswrapper[4812]: E0216 14:24:51.898583 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:24:51 crc kubenswrapper[4812]: E0216 14:24:51.900638 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:25:06 crc kubenswrapper[4812]: I0216 14:25:06.879583 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:25:06 crc kubenswrapper[4812]: E0216 14:25:06.880366 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:25:06 crc kubenswrapper[4812]: E0216 14:25:06.881595 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:25:16 crc kubenswrapper[4812]: I0216 14:25:16.012334 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-8zb8t_38ed5722-af29-41e2-a323-dfe0c39d537d/manager/0.log" Feb 16 14:25:16 crc kubenswrapper[4812]: I0216 14:25:16.279077 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8_a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2/util/0.log" Feb 16 14:25:16 crc kubenswrapper[4812]: I0216 14:25:16.545518 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8_a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2/util/0.log" Feb 16 14:25:16 crc kubenswrapper[4812]: I0216 14:25:16.552342 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8_a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2/pull/0.log" Feb 16 14:25:16 crc kubenswrapper[4812]: I0216 14:25:16.739094 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8_a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2/pull/0.log" Feb 16 14:25:17 crc 
kubenswrapper[4812]: I0216 14:25:17.195880 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8_a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2/util/0.log" Feb 16 14:25:17 crc kubenswrapper[4812]: I0216 14:25:17.275134 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8_a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2/pull/0.log" Feb 16 14:25:17 crc kubenswrapper[4812]: I0216 14:25:17.467662 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ec0b6c29d1e9466755884cac70f48ead9fe1ee06d1693958ffff3251182nwd8_a6aa3c82-ffc4-4a8f-8ab7-5e4b32ee90b2/extract/0.log" Feb 16 14:25:17 crc kubenswrapper[4812]: I0216 14:25:17.602080 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-6vps7_484efbc6-46c2-44e3-8edb-8273b347f394/manager/0.log" Feb 16 14:25:17 crc kubenswrapper[4812]: I0216 14:25:17.833617 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-ckx77_2e2f91a6-d4f8-422e-bfc1-a78ab10f1338/manager/0.log" Feb 16 14:25:17 crc kubenswrapper[4812]: I0216 14:25:17.928549 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-xsnnh_961308e3-cfdc-43ac-8cf0-63cdc9e8900d/manager/0.log" Feb 16 14:25:18 crc kubenswrapper[4812]: I0216 14:25:18.557880 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-5lnxb_2e961da9-05f1-4eaf-ba3a-5d5bc14b7704/manager/0.log" Feb 16 14:25:18 crc kubenswrapper[4812]: I0216 14:25:18.888422 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-ts4nk_aefc705b-fdf3-4a72-9a38-a78907603aca/manager/0.log" Feb 16 14:25:18 crc kubenswrapper[4812]: I0216 14:25:18.982709 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-z7qxl_e9326a1e-ab44-4168-96a4-d140c2f95a88/manager/0.log" Feb 16 14:25:19 crc kubenswrapper[4812]: I0216 14:25:19.277905 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-sh8cv_eb78077d-7a72-4293-a0bf-8f7ce62aad8d/manager/0.log" Feb 16 14:25:19 crc kubenswrapper[4812]: I0216 14:25:19.406375 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-9j44f_d14b07fa-996e-407e-b4ff-9cb90a7c8ca1/manager/0.log" Feb 16 14:25:19 crc kubenswrapper[4812]: I0216 14:25:19.590511 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-ggbhg_7ef01067-cb64-47cd-a065-9d9677b9646c/manager/0.log" Feb 16 14:25:19 crc kubenswrapper[4812]: E0216 14:25:19.881083 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:25:19 crc kubenswrapper[4812]: I0216 14:25:19.984473 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-c6zzj_eec306d2-c02f-4a72-bc69-95ee26d33688/manager/0.log" Feb 16 14:25:20 crc kubenswrapper[4812]: I0216 14:25:20.163316 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-7sb4m_0c6d2754-f4e2-497a-aa47-aa568aa9805c/manager/0.log" Feb 16 14:25:20 crc kubenswrapper[4812]: I0216 14:25:20.537917 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cdjllh_06224f00-35c9-4aae-9dbc-c803abd7de2c/manager/0.log" Feb 16 14:25:21 crc kubenswrapper[4812]: I0216 14:25:21.112469 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-79487dd5dc-7hqsn_d8b435a8-6cec-4517-bf21-3241511a1cbc/operator/0.log" Feb 16 14:25:21 crc kubenswrapper[4812]: I0216 14:25:21.367587 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fgk9h_e649f9b1-93d2-4d2d-abeb-a67d78038fd9/registry-server/0.log" Feb 16 14:25:21 crc kubenswrapper[4812]: I0216 14:25:21.892634 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:25:21 crc kubenswrapper[4812]: E0216 14:25:21.894707 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:25:21 crc kubenswrapper[4812]: I0216 14:25:21.896592 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-c5l89_fcba7077-c2f4-4d80-ac24-955ddf007acc/manager/0.log" Feb 16 14:25:22 crc kubenswrapper[4812]: I0216 14:25:22.083364 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-2vz54_ce4b529c-2a7c-4919-8a99-78aa7eae9828/manager/0.log" Feb 16 14:25:22 crc kubenswrapper[4812]: I0216 14:25:22.086810 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-84k8d_f05e0adf-a8ed-41cf-9808-b10b0c36e48d/manager/0.log" Feb 16 14:25:22 crc kubenswrapper[4812]: I0216 14:25:22.309662 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-nr8zz_e90db606-561d-4cbc-b3ca-7078e17685ad/operator/0.log" Feb 16 14:25:22 crc kubenswrapper[4812]: I0216 14:25:22.356189 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-lwc2x_21b09391-dc85-4bf1-9210-882f3ee0af01/manager/0.log" Feb 16 14:25:22 crc kubenswrapper[4812]: I0216 14:25:22.496146 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-778459db5b-d66gm_3cdb1565-bb99-4e18-9089-7a2112685704/manager/0.log" Feb 16 14:25:22 crc kubenswrapper[4812]: I0216 14:25:22.729301 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-mpb9g_ac3c8476-8d98-47f3-b962-23b404164ac2/manager/0.log" Feb 16 14:25:22 crc kubenswrapper[4812]: I0216 14:25:22.986375 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-tzzsc_7bd980be-8cfe-448f-a2e0-7dae86e075c9/manager/0.log" Feb 16 14:25:23 crc kubenswrapper[4812]: I0216 14:25:23.131636 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-866896b95f-8plmx_e75f5735-aff0-453a-8be9-4f55966c7232/manager/0.log" Feb 16 14:25:23 crc kubenswrapper[4812]: I0216 14:25:23.839016 4812 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-rtgvj_62b330d8-6f6a-4daf-ba84-fada3debae44/manager/0.log" Feb 16 14:25:30 crc kubenswrapper[4812]: I0216 14:25:30.882636 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 14:25:30 crc kubenswrapper[4812]: E0216 14:25:30.978015 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 14:25:30 crc kubenswrapper[4812]: E0216 14:25:30.978215 4812 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 14:25:30 crc kubenswrapper[4812]: E0216 14:25:30.978685 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq54s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-krnzs_openstack(a7d4eae6-781f-4675-a6c3-ee0f1589c735): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 14:25:30 crc kubenswrapper[4812]: E0216 14:25:30.979928 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:25:36 crc kubenswrapper[4812]: I0216 14:25:36.879735 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:25:36 crc kubenswrapper[4812]: E0216 14:25:36.881004 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:25:44 crc kubenswrapper[4812]: E0216 14:25:44.882166 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:25:46 crc kubenswrapper[4812]: I0216 14:25:46.521209 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-klckm_89281b9f-7c51-470c-aa86-bdfd398f2a2a/control-plane-machine-set-operator/0.log" Feb 16 14:25:46 crc kubenswrapper[4812]: I0216 14:25:46.770579 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4cx9t_3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479/kube-rbac-proxy/0.log" Feb 16 14:25:46 crc kubenswrapper[4812]: I0216 14:25:46.813850 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4cx9t_3d90f788-7a7d-4ccd-9e4c-8a0fdcfdd479/machine-api-operator/0.log" Feb 16 14:25:48 crc 
kubenswrapper[4812]: I0216 14:25:48.879359 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:25:48 crc kubenswrapper[4812]: E0216 14:25:48.881237 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:25:56 crc kubenswrapper[4812]: E0216 14:25:56.882938 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:26:00 crc kubenswrapper[4812]: I0216 14:26:00.864701 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-bss4n_ffb00ae0-8006-44ba-8c11-eed07e479ec6/cert-manager-controller/0.log" Feb 16 14:26:00 crc kubenswrapper[4812]: I0216 14:26:00.881747 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:26:00 crc kubenswrapper[4812]: E0216 14:26:00.882258 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:26:01 crc 
kubenswrapper[4812]: I0216 14:26:01.097762 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-5bdk9_9b3c3773-e9da-431b-863a-0a3df06713d0/cert-manager-cainjector/0.log" Feb 16 14:26:01 crc kubenswrapper[4812]: I0216 14:26:01.127757 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-mb4rm_9cb816f6-841f-4759-9598-ec4ea11806c4/cert-manager-webhook/0.log" Feb 16 14:26:07 crc kubenswrapper[4812]: E0216 14:26:07.882175 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:26:12 crc kubenswrapper[4812]: I0216 14:26:12.879837 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:26:12 crc kubenswrapper[4812]: E0216 14:26:12.880648 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:26:14 crc kubenswrapper[4812]: I0216 14:26:14.646047 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-jh9xp_2833a171-e8b3-4a2e-99bd-28b4724d3123/nmstate-console-plugin/0.log" Feb 16 14:26:14 crc kubenswrapper[4812]: I0216 14:26:14.839370 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-5dtvn_b68968e3-1037-494a-8c4b-f6f4ae6c3e02/nmstate-handler/0.log" Feb 16 14:26:14 crc kubenswrapper[4812]: I0216 14:26:14.893322 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-8zchp_de488e97-05f3-4b9c-abd2-2ae259997bc1/kube-rbac-proxy/0.log" Feb 16 14:26:14 crc kubenswrapper[4812]: I0216 14:26:14.982535 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-8zchp_de488e97-05f3-4b9c-abd2-2ae259997bc1/nmstate-metrics/0.log" Feb 16 14:26:15 crc kubenswrapper[4812]: I0216 14:26:15.132261 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-g42s8_acdc5133-d5db-443d-b935-f284f767ac99/nmstate-operator/0.log" Feb 16 14:26:15 crc kubenswrapper[4812]: I0216 14:26:15.185911 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-8kkws_d5f47728-5a50-45df-8379-cc1e7779f00c/nmstate-webhook/0.log" Feb 16 14:26:18 crc kubenswrapper[4812]: E0216 14:26:18.881314 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:26:24 crc kubenswrapper[4812]: I0216 14:26:24.878971 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:26:25 crc kubenswrapper[4812]: I0216 14:26:25.832054 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerStarted","Data":"c5f58f6a974f79b6081c75e064f880be69c771923e683ab20b29c5f39942ca14"} Feb 
16 14:26:31 crc kubenswrapper[4812]: I0216 14:26:31.012969 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7db4b9ddb7-grxq9_f8571da4-b4fd-4d36-923e-f0924cb993e9/kube-rbac-proxy/0.log" Feb 16 14:26:31 crc kubenswrapper[4812]: I0216 14:26:31.064316 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7db4b9ddb7-grxq9_f8571da4-b4fd-4d36-923e-f0924cb993e9/manager/0.log" Feb 16 14:26:33 crc kubenswrapper[4812]: E0216 14:26:33.890762 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:26:45 crc kubenswrapper[4812]: E0216 14:26:45.881045 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:26:45 crc kubenswrapper[4812]: I0216 14:26:45.945055 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-q86d9_9e3d83dd-a02e-46b8-8cb0-e3840347e5ad/prometheus-operator/0.log" Feb 16 14:26:46 crc kubenswrapper[4812]: I0216 14:26:46.139028 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj_cc311aeb-05a8-4b4d-abe0-c35db319d48a/prometheus-operator-admission-webhook/0.log" Feb 16 14:26:46 crc kubenswrapper[4812]: I0216 14:26:46.209269 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj_40d22a6e-3db9-43c6-9ca4-560ef32ca2a1/prometheus-operator-admission-webhook/0.log" Feb 16 14:26:46 crc kubenswrapper[4812]: I0216 14:26:46.373384 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9kvgm_90f1d72f-119e-4971-bfad-a3210f07e473/operator/0.log" Feb 16 14:26:46 crc kubenswrapper[4812]: I0216 14:26:46.410983 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-9fzvk_a975b82f-9342-4bf8-812a-0d2188aeef74/perses-operator/0.log" Feb 16 14:26:59 crc kubenswrapper[4812]: E0216 14:26:59.883118 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:27:03 crc kubenswrapper[4812]: I0216 14:27:03.847949 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-k45w2_5977d87d-ec62-4a14-8df1-d1b37209d48d/kube-rbac-proxy/0.log" Feb 16 14:27:03 crc kubenswrapper[4812]: I0216 14:27:03.996192 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-k45w2_5977d87d-ec62-4a14-8df1-d1b37209d48d/controller/0.log" Feb 16 14:27:04 crc kubenswrapper[4812]: I0216 14:27:04.108699 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/cp-frr-files/0.log" Feb 16 14:27:04 crc kubenswrapper[4812]: I0216 14:27:04.344164 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/cp-reloader/0.log" Feb 16 14:27:04 crc kubenswrapper[4812]: I0216 
14:27:04.357834 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/cp-frr-files/0.log" Feb 16 14:27:04 crc kubenswrapper[4812]: I0216 14:27:04.420608 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/cp-metrics/0.log" Feb 16 14:27:04 crc kubenswrapper[4812]: I0216 14:27:04.495115 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/cp-reloader/0.log" Feb 16 14:27:04 crc kubenswrapper[4812]: I0216 14:27:04.675974 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/cp-frr-files/0.log" Feb 16 14:27:04 crc kubenswrapper[4812]: I0216 14:27:04.707884 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/cp-reloader/0.log" Feb 16 14:27:04 crc kubenswrapper[4812]: I0216 14:27:04.764258 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/cp-metrics/0.log" Feb 16 14:27:04 crc kubenswrapper[4812]: I0216 14:27:04.801030 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/cp-metrics/0.log" Feb 16 14:27:04 crc kubenswrapper[4812]: I0216 14:27:04.995659 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/cp-frr-files/0.log" Feb 16 14:27:05 crc kubenswrapper[4812]: I0216 14:27:05.006447 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/controller/0.log" Feb 16 14:27:05 crc kubenswrapper[4812]: I0216 14:27:05.020521 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/cp-reloader/0.log" Feb 16 14:27:05 crc kubenswrapper[4812]: I0216 14:27:05.049370 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/cp-metrics/0.log" Feb 16 14:27:05 crc kubenswrapper[4812]: I0216 14:27:05.227674 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/frr-metrics/0.log" Feb 16 14:27:05 crc kubenswrapper[4812]: I0216 14:27:05.235304 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/kube-rbac-proxy/0.log" Feb 16 14:27:05 crc kubenswrapper[4812]: I0216 14:27:05.286768 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/kube-rbac-proxy-frr/0.log" Feb 16 14:27:05 crc kubenswrapper[4812]: I0216 14:27:05.840490 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-2h7tc_78add67a-1f63-4b2a-88b5-39f2ef90c06e/frr-k8s-webhook-server/0.log" Feb 16 14:27:05 crc kubenswrapper[4812]: I0216 14:27:05.840767 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/reloader/0.log" Feb 16 14:27:06 crc kubenswrapper[4812]: I0216 14:27:06.275475 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-6zjgs_7aa158c4-bd4e-46d5-92f5-8635e722a673/frr/0.log" Feb 16 14:27:06 crc kubenswrapper[4812]: I0216 14:27:06.288426 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7f8ffc447f-2c5xc_6746b0af-7980-47b3-bc36-374bc1bdc6d1/manager/0.log" Feb 16 14:27:06 crc kubenswrapper[4812]: I0216 14:27:06.445288 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6d44948dbf-dlj6m_4567903c-04af-432e-8c9d-7e7150f94226/webhook-server/0.log" Feb 16 14:27:06 crc kubenswrapper[4812]: I0216 14:27:06.490822 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wpmzn_c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4/kube-rbac-proxy/0.log" Feb 16 14:27:06 crc kubenswrapper[4812]: I0216 14:27:06.878834 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wpmzn_c3b5d645-9c9d-48e5-aeb1-9a3dcd39c0a4/speaker/0.log" Feb 16 14:27:14 crc kubenswrapper[4812]: E0216 14:27:14.881270 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:27:22 crc kubenswrapper[4812]: I0216 14:27:22.944920 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9_a794509c-f142-4184-80c5-38d6095917df/util/0.log" Feb 16 14:27:23 crc kubenswrapper[4812]: I0216 14:27:23.207331 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9_a794509c-f142-4184-80c5-38d6095917df/util/0.log" Feb 16 14:27:23 crc kubenswrapper[4812]: I0216 14:27:23.283744 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9_a794509c-f142-4184-80c5-38d6095917df/pull/0.log" Feb 16 14:27:23 crc kubenswrapper[4812]: I0216 14:27:23.305413 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9_a794509c-f142-4184-80c5-38d6095917df/pull/0.log" Feb 16 14:27:23 crc kubenswrapper[4812]: I0216 14:27:23.481087 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9_a794509c-f142-4184-80c5-38d6095917df/extract/0.log" Feb 16 14:27:23 crc kubenswrapper[4812]: I0216 14:27:23.503391 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9_a794509c-f142-4184-80c5-38d6095917df/util/0.log" Feb 16 14:27:23 crc kubenswrapper[4812]: I0216 14:27:23.534181 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651l9hm9_a794509c-f142-4184-80c5-38d6095917df/pull/0.log" Feb 16 14:27:23 crc kubenswrapper[4812]: I0216 14:27:23.656907 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849_81df20ac-ca53-4b60-8813-b91f69263210/util/0.log" Feb 16 14:27:23 crc kubenswrapper[4812]: I0216 14:27:23.843248 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849_81df20ac-ca53-4b60-8813-b91f69263210/util/0.log" Feb 16 14:27:23 crc kubenswrapper[4812]: I0216 14:27:23.909360 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849_81df20ac-ca53-4b60-8813-b91f69263210/pull/0.log" Feb 16 14:27:23 crc kubenswrapper[4812]: I0216 14:27:23.930976 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849_81df20ac-ca53-4b60-8813-b91f69263210/pull/0.log" Feb 16 
14:27:24 crc kubenswrapper[4812]: I0216 14:27:24.132231 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849_81df20ac-ca53-4b60-8813-b91f69263210/extract/0.log" Feb 16 14:27:24 crc kubenswrapper[4812]: I0216 14:27:24.151421 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849_81df20ac-ca53-4b60-8813-b91f69263210/pull/0.log" Feb 16 14:27:24 crc kubenswrapper[4812]: I0216 14:27:24.173954 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088c849_81df20ac-ca53-4b60-8813-b91f69263210/util/0.log" Feb 16 14:27:24 crc kubenswrapper[4812]: I0216 14:27:24.295821 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6_f7fc9c91-5507-47f3-a456-4e415f0fab79/util/0.log" Feb 16 14:27:24 crc kubenswrapper[4812]: I0216 14:27:24.509618 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6_f7fc9c91-5507-47f3-a456-4e415f0fab79/util/0.log" Feb 16 14:27:24 crc kubenswrapper[4812]: I0216 14:27:24.528976 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6_f7fc9c91-5507-47f3-a456-4e415f0fab79/pull/0.log" Feb 16 14:27:24 crc kubenswrapper[4812]: I0216 14:27:24.559628 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6_f7fc9c91-5507-47f3-a456-4e415f0fab79/pull/0.log" Feb 16 14:27:24 crc kubenswrapper[4812]: I0216 14:27:24.710421 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6_f7fc9c91-5507-47f3-a456-4e415f0fab79/util/0.log" Feb 16 14:27:24 crc kubenswrapper[4812]: I0216 14:27:24.724494 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6_f7fc9c91-5507-47f3-a456-4e415f0fab79/pull/0.log" Feb 16 14:27:24 crc kubenswrapper[4812]: I0216 14:27:24.729793 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213np9w6_f7fc9c91-5507-47f3-a456-4e415f0fab79/extract/0.log" Feb 16 14:27:24 crc kubenswrapper[4812]: I0216 14:27:24.939546 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-88kx2_00153ebb-09b0-4de5-82ce-8e71fc35acac/extract-utilities/0.log" Feb 16 14:27:25 crc kubenswrapper[4812]: I0216 14:27:25.339914 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-88kx2_00153ebb-09b0-4de5-82ce-8e71fc35acac/extract-utilities/0.log" Feb 16 14:27:25 crc kubenswrapper[4812]: I0216 14:27:25.340175 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-88kx2_00153ebb-09b0-4de5-82ce-8e71fc35acac/extract-content/0.log" Feb 16 14:27:25 crc kubenswrapper[4812]: I0216 14:27:25.369487 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-88kx2_00153ebb-09b0-4de5-82ce-8e71fc35acac/extract-content/0.log" Feb 16 14:27:25 crc kubenswrapper[4812]: I0216 14:27:25.548579 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-88kx2_00153ebb-09b0-4de5-82ce-8e71fc35acac/extract-content/0.log" Feb 16 14:27:25 crc kubenswrapper[4812]: I0216 14:27:25.595117 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-88kx2_00153ebb-09b0-4de5-82ce-8e71fc35acac/extract-utilities/0.log" Feb 16 14:27:25 crc kubenswrapper[4812]: I0216 14:27:25.795971 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4f7gt_4dcfe737-220e-464b-b4dd-7956ceec99b6/extract-utilities/0.log" Feb 16 14:27:26 crc kubenswrapper[4812]: I0216 14:27:26.240136 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-88kx2_00153ebb-09b0-4de5-82ce-8e71fc35acac/registry-server/0.log" Feb 16 14:27:26 crc kubenswrapper[4812]: I0216 14:27:26.443789 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4f7gt_4dcfe737-220e-464b-b4dd-7956ceec99b6/extract-content/0.log" Feb 16 14:27:26 crc kubenswrapper[4812]: I0216 14:27:26.527688 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4f7gt_4dcfe737-220e-464b-b4dd-7956ceec99b6/extract-content/0.log" Feb 16 14:27:26 crc kubenswrapper[4812]: I0216 14:27:26.596521 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4f7gt_4dcfe737-220e-464b-b4dd-7956ceec99b6/extract-utilities/0.log" Feb 16 14:27:26 crc kubenswrapper[4812]: I0216 14:27:26.691230 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4f7gt_4dcfe737-220e-464b-b4dd-7956ceec99b6/extract-utilities/0.log" Feb 16 14:27:26 crc kubenswrapper[4812]: I0216 14:27:26.714962 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4f7gt_4dcfe737-220e-464b-b4dd-7956ceec99b6/extract-content/0.log" Feb 16 14:27:26 crc kubenswrapper[4812]: I0216 14:27:26.954083 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm_3be7ee4e-d1c9-4c45-87b8-0959f910fe9a/util/0.log" Feb 16 14:27:27 crc kubenswrapper[4812]: I0216 14:27:27.206858 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm_3be7ee4e-d1c9-4c45-87b8-0959f910fe9a/util/0.log" Feb 16 14:27:27 crc kubenswrapper[4812]: I0216 14:27:27.214826 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm_3be7ee4e-d1c9-4c45-87b8-0959f910fe9a/pull/0.log" Feb 16 14:27:27 crc kubenswrapper[4812]: I0216 14:27:27.241549 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4f7gt_4dcfe737-220e-464b-b4dd-7956ceec99b6/registry-server/0.log" Feb 16 14:27:27 crc kubenswrapper[4812]: I0216 14:27:27.318567 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm_3be7ee4e-d1c9-4c45-87b8-0959f910fe9a/pull/0.log" Feb 16 14:27:27 crc kubenswrapper[4812]: I0216 14:27:27.392007 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm_3be7ee4e-d1c9-4c45-87b8-0959f910fe9a/pull/0.log" Feb 16 14:27:27 crc kubenswrapper[4812]: I0216 14:27:27.417752 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm_3be7ee4e-d1c9-4c45-87b8-0959f910fe9a/util/0.log" Feb 16 14:27:27 crc kubenswrapper[4812]: I0216 14:27:27.444079 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecallcpm_3be7ee4e-d1c9-4c45-87b8-0959f910fe9a/extract/0.log" Feb 16 14:27:27 crc 
kubenswrapper[4812]: I0216 14:27:27.668031 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lm499_62c219b7-14b9-4105-8dcd-195446a4b07d/marketplace-operator/0.log" Feb 16 14:27:27 crc kubenswrapper[4812]: I0216 14:27:27.679638 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wspdc_97f0d30e-e1e9-4b04-a667-9774b17b6e1d/extract-utilities/0.log" Feb 16 14:27:27 crc kubenswrapper[4812]: I0216 14:27:27.922437 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wspdc_97f0d30e-e1e9-4b04-a667-9774b17b6e1d/extract-utilities/0.log" Feb 16 14:27:27 crc kubenswrapper[4812]: I0216 14:27:27.968358 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wspdc_97f0d30e-e1e9-4b04-a667-9774b17b6e1d/extract-content/0.log" Feb 16 14:27:27 crc kubenswrapper[4812]: I0216 14:27:27.973655 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wspdc_97f0d30e-e1e9-4b04-a667-9774b17b6e1d/extract-content/0.log" Feb 16 14:27:28 crc kubenswrapper[4812]: I0216 14:27:28.127960 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wspdc_97f0d30e-e1e9-4b04-a667-9774b17b6e1d/extract-utilities/0.log" Feb 16 14:27:28 crc kubenswrapper[4812]: I0216 14:27:28.142626 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wspdc_97f0d30e-e1e9-4b04-a667-9774b17b6e1d/extract-content/0.log" Feb 16 14:27:28 crc kubenswrapper[4812]: I0216 14:27:28.182171 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2kpr_8cb0d463-0679-4810-a6fa-7e56d77677db/extract-utilities/0.log" Feb 16 14:27:28 crc kubenswrapper[4812]: I0216 14:27:28.325300 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-wspdc_97f0d30e-e1e9-4b04-a667-9774b17b6e1d/registry-server/0.log" Feb 16 14:27:28 crc kubenswrapper[4812]: I0216 14:27:28.436496 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2kpr_8cb0d463-0679-4810-a6fa-7e56d77677db/extract-utilities/0.log" Feb 16 14:27:28 crc kubenswrapper[4812]: I0216 14:27:28.480568 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2kpr_8cb0d463-0679-4810-a6fa-7e56d77677db/extract-content/0.log" Feb 16 14:27:28 crc kubenswrapper[4812]: I0216 14:27:28.482232 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2kpr_8cb0d463-0679-4810-a6fa-7e56d77677db/extract-content/0.log" Feb 16 14:27:28 crc kubenswrapper[4812]: I0216 14:27:28.644234 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2kpr_8cb0d463-0679-4810-a6fa-7e56d77677db/extract-utilities/0.log" Feb 16 14:27:28 crc kubenswrapper[4812]: I0216 14:27:28.679951 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2kpr_8cb0d463-0679-4810-a6fa-7e56d77677db/extract-content/0.log" Feb 16 14:27:28 crc kubenswrapper[4812]: E0216 14:27:28.887601 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:27:29 crc kubenswrapper[4812]: I0216 14:27:29.118254 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k2kpr_8cb0d463-0679-4810-a6fa-7e56d77677db/registry-server/0.log" Feb 16 14:27:43 crc kubenswrapper[4812]: E0216 14:27:43.881799 4812 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:27:45 crc kubenswrapper[4812]: I0216 14:27:45.606830 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-q86d9_9e3d83dd-a02e-46b8-8cb0-e3840347e5ad/prometheus-operator/0.log" Feb 16 14:27:45 crc kubenswrapper[4812]: I0216 14:27:45.637820 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5d55f77766-x8xvj_40d22a6e-3db9-43c6-9ca4-560ef32ca2a1/prometheus-operator-admission-webhook/0.log" Feb 16 14:27:45 crc kubenswrapper[4812]: I0216 14:27:45.694336 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5d55f77766-2w7pj_cc311aeb-05a8-4b4d-abe0-c35db319d48a/prometheus-operator-admission-webhook/0.log" Feb 16 14:27:45 crc kubenswrapper[4812]: I0216 14:27:45.839663 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-9fzvk_a975b82f-9342-4bf8-812a-0d2188aeef74/perses-operator/0.log" Feb 16 14:27:45 crc kubenswrapper[4812]: I0216 14:27:45.875589 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9kvgm_90f1d72f-119e-4971-bfad-a3210f07e473/operator/0.log" Feb 16 14:27:58 crc kubenswrapper[4812]: E0216 14:27:58.883432 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:28:01 crc kubenswrapper[4812]: I0216 14:28:01.986727 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7db4b9ddb7-grxq9_f8571da4-b4fd-4d36-923e-f0924cb993e9/manager/0.log" Feb 16 14:28:02 crc kubenswrapper[4812]: I0216 14:28:02.024478 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7db4b9ddb7-grxq9_f8571da4-b4fd-4d36-923e-f0924cb993e9/kube-rbac-proxy/0.log" Feb 16 14:28:11 crc kubenswrapper[4812]: E0216 14:28:11.893105 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:28:26 crc kubenswrapper[4812]: E0216 14:28:26.884137 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:28:40 crc kubenswrapper[4812]: E0216 14:28:40.882933 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:28:44 crc kubenswrapper[4812]: I0216 14:28:44.548527 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:28:44 crc kubenswrapper[4812]: I0216 14:28:44.549379 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:28:54 crc kubenswrapper[4812]: E0216 14:28:54.882939 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:29:07 crc kubenswrapper[4812]: E0216 14:29:07.884827 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.185854 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2f6kd"] Feb 16 14:29:09 crc kubenswrapper[4812]: E0216 14:29:09.186726 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69398252-468a-4e47-9035-ccdcd911654e" containerName="extract-content" Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.186744 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="69398252-468a-4e47-9035-ccdcd911654e" containerName="extract-content" Feb 16 14:29:09 crc kubenswrapper[4812]: 
E0216 14:29:09.186795 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69398252-468a-4e47-9035-ccdcd911654e" containerName="extract-utilities" Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.186803 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="69398252-468a-4e47-9035-ccdcd911654e" containerName="extract-utilities" Feb 16 14:29:09 crc kubenswrapper[4812]: E0216 14:29:09.186818 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69398252-468a-4e47-9035-ccdcd911654e" containerName="registry-server" Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.186827 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="69398252-468a-4e47-9035-ccdcd911654e" containerName="registry-server" Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.187060 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="69398252-468a-4e47-9035-ccdcd911654e" containerName="registry-server" Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.188952 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.209617 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2f6kd"] Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.268392 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fc3242c-4670-42e1-be02-12cdac84dd0d-catalog-content\") pod \"redhat-marketplace-2f6kd\" (UID: \"6fc3242c-4670-42e1-be02-12cdac84dd0d\") " pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.268523 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcrj5\" (UniqueName: \"kubernetes.io/projected/6fc3242c-4670-42e1-be02-12cdac84dd0d-kube-api-access-dcrj5\") pod \"redhat-marketplace-2f6kd\" (UID: \"6fc3242c-4670-42e1-be02-12cdac84dd0d\") " pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.268602 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fc3242c-4670-42e1-be02-12cdac84dd0d-utilities\") pod \"redhat-marketplace-2f6kd\" (UID: \"6fc3242c-4670-42e1-be02-12cdac84dd0d\") " pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.371199 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fc3242c-4670-42e1-be02-12cdac84dd0d-catalog-content\") pod \"redhat-marketplace-2f6kd\" (UID: \"6fc3242c-4670-42e1-be02-12cdac84dd0d\") " pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.371317 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dcrj5\" (UniqueName: \"kubernetes.io/projected/6fc3242c-4670-42e1-be02-12cdac84dd0d-kube-api-access-dcrj5\") pod \"redhat-marketplace-2f6kd\" (UID: \"6fc3242c-4670-42e1-be02-12cdac84dd0d\") " pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.371393 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fc3242c-4670-42e1-be02-12cdac84dd0d-utilities\") pod \"redhat-marketplace-2f6kd\" (UID: \"6fc3242c-4670-42e1-be02-12cdac84dd0d\") " pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.372177 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fc3242c-4670-42e1-be02-12cdac84dd0d-catalog-content\") pod \"redhat-marketplace-2f6kd\" (UID: \"6fc3242c-4670-42e1-be02-12cdac84dd0d\") " pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.372239 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fc3242c-4670-42e1-be02-12cdac84dd0d-utilities\") pod \"redhat-marketplace-2f6kd\" (UID: \"6fc3242c-4670-42e1-be02-12cdac84dd0d\") " pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.408839 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcrj5\" (UniqueName: \"kubernetes.io/projected/6fc3242c-4670-42e1-be02-12cdac84dd0d-kube-api-access-dcrj5\") pod \"redhat-marketplace-2f6kd\" (UID: \"6fc3242c-4670-42e1-be02-12cdac84dd0d\") " pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:09 crc kubenswrapper[4812]: I0216 14:29:09.562179 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:10 crc kubenswrapper[4812]: I0216 14:29:10.269687 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2f6kd"] Feb 16 14:29:10 crc kubenswrapper[4812]: I0216 14:29:10.469503 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f6kd" event={"ID":"6fc3242c-4670-42e1-be02-12cdac84dd0d","Type":"ContainerStarted","Data":"bbede98f7fa003dc666d555f6137f0e8aa121bd21e8112baba2d069b5e221487"} Feb 16 14:29:11 crc kubenswrapper[4812]: I0216 14:29:11.480917 4812 generic.go:334] "Generic (PLEG): container finished" podID="6fc3242c-4670-42e1-be02-12cdac84dd0d" containerID="337e0ab0320cfabd0e0298a2854d48e439855f86cb0b54bc5e996ce03f4dd908" exitCode=0 Feb 16 14:29:11 crc kubenswrapper[4812]: I0216 14:29:11.481011 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f6kd" event={"ID":"6fc3242c-4670-42e1-be02-12cdac84dd0d","Type":"ContainerDied","Data":"337e0ab0320cfabd0e0298a2854d48e439855f86cb0b54bc5e996ce03f4dd908"} Feb 16 14:29:13 crc kubenswrapper[4812]: I0216 14:29:13.508222 4812 generic.go:334] "Generic (PLEG): container finished" podID="6fc3242c-4670-42e1-be02-12cdac84dd0d" containerID="aeae09609cf7cde4b88009b5dbb50e9ea25376e5263c7ed4fed80dfd4ef74097" exitCode=0 Feb 16 14:29:13 crc kubenswrapper[4812]: I0216 14:29:13.508307 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f6kd" event={"ID":"6fc3242c-4670-42e1-be02-12cdac84dd0d","Type":"ContainerDied","Data":"aeae09609cf7cde4b88009b5dbb50e9ea25376e5263c7ed4fed80dfd4ef74097"} Feb 16 14:29:14 crc kubenswrapper[4812]: I0216 14:29:14.520937 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f6kd" 
event={"ID":"6fc3242c-4670-42e1-be02-12cdac84dd0d","Type":"ContainerStarted","Data":"d2bc5190cb5a7c02a721772f87013da025915928d62c0fc9fdab43d4a60562bd"} Feb 16 14:29:14 crc kubenswrapper[4812]: I0216 14:29:14.546671 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2f6kd" podStartSLOduration=3.084201493 podStartE2EDuration="5.546649995s" podCreationTimestamp="2026-02-16 14:29:09 +0000 UTC" firstStartedPulling="2026-02-16 14:29:11.483764426 +0000 UTC m=+3440.548095127" lastFinishedPulling="2026-02-16 14:29:13.946212928 +0000 UTC m=+3443.010543629" observedRunningTime="2026-02-16 14:29:14.541097957 +0000 UTC m=+3443.605428688" watchObservedRunningTime="2026-02-16 14:29:14.546649995 +0000 UTC m=+3443.610980706" Feb 16 14:29:14 crc kubenswrapper[4812]: I0216 14:29:14.548728 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:29:14 crc kubenswrapper[4812]: I0216 14:29:14.548803 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:29:19 crc kubenswrapper[4812]: I0216 14:29:19.563413 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:19 crc kubenswrapper[4812]: I0216 14:29:19.564092 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:19 crc kubenswrapper[4812]: I0216 14:29:19.637275 4812 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:19 crc kubenswrapper[4812]: I0216 14:29:19.702015 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:19 crc kubenswrapper[4812]: I0216 14:29:19.904390 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2f6kd"] Feb 16 14:29:21 crc kubenswrapper[4812]: I0216 14:29:21.642712 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2f6kd" podUID="6fc3242c-4670-42e1-be02-12cdac84dd0d" containerName="registry-server" containerID="cri-o://d2bc5190cb5a7c02a721772f87013da025915928d62c0fc9fdab43d4a60562bd" gracePeriod=2 Feb 16 14:29:21 crc kubenswrapper[4812]: E0216 14:29:21.888812 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.168966 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.182986 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fc3242c-4670-42e1-be02-12cdac84dd0d-utilities\") pod \"6fc3242c-4670-42e1-be02-12cdac84dd0d\" (UID: \"6fc3242c-4670-42e1-be02-12cdac84dd0d\") " Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.183267 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fc3242c-4670-42e1-be02-12cdac84dd0d-catalog-content\") pod \"6fc3242c-4670-42e1-be02-12cdac84dd0d\" (UID: \"6fc3242c-4670-42e1-be02-12cdac84dd0d\") " Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.183493 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcrj5\" (UniqueName: \"kubernetes.io/projected/6fc3242c-4670-42e1-be02-12cdac84dd0d-kube-api-access-dcrj5\") pod \"6fc3242c-4670-42e1-be02-12cdac84dd0d\" (UID: \"6fc3242c-4670-42e1-be02-12cdac84dd0d\") " Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.184795 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fc3242c-4670-42e1-be02-12cdac84dd0d-utilities" (OuterVolumeSpecName: "utilities") pod "6fc3242c-4670-42e1-be02-12cdac84dd0d" (UID: "6fc3242c-4670-42e1-be02-12cdac84dd0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.197197 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fc3242c-4670-42e1-be02-12cdac84dd0d-kube-api-access-dcrj5" (OuterVolumeSpecName: "kube-api-access-dcrj5") pod "6fc3242c-4670-42e1-be02-12cdac84dd0d" (UID: "6fc3242c-4670-42e1-be02-12cdac84dd0d"). InnerVolumeSpecName "kube-api-access-dcrj5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.218597 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fc3242c-4670-42e1-be02-12cdac84dd0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fc3242c-4670-42e1-be02-12cdac84dd0d" (UID: "6fc3242c-4670-42e1-be02-12cdac84dd0d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.285986 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcrj5\" (UniqueName: \"kubernetes.io/projected/6fc3242c-4670-42e1-be02-12cdac84dd0d-kube-api-access-dcrj5\") on node \"crc\" DevicePath \"\"" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.286027 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fc3242c-4670-42e1-be02-12cdac84dd0d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.286040 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fc3242c-4670-42e1-be02-12cdac84dd0d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.655479 4812 generic.go:334] "Generic (PLEG): container finished" podID="6fc3242c-4670-42e1-be02-12cdac84dd0d" containerID="d2bc5190cb5a7c02a721772f87013da025915928d62c0fc9fdab43d4a60562bd" exitCode=0 Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.655531 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2f6kd" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.655533 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f6kd" event={"ID":"6fc3242c-4670-42e1-be02-12cdac84dd0d","Type":"ContainerDied","Data":"d2bc5190cb5a7c02a721772f87013da025915928d62c0fc9fdab43d4a60562bd"} Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.656647 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f6kd" event={"ID":"6fc3242c-4670-42e1-be02-12cdac84dd0d","Type":"ContainerDied","Data":"bbede98f7fa003dc666d555f6137f0e8aa121bd21e8112baba2d069b5e221487"} Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.656727 4812 scope.go:117] "RemoveContainer" containerID="d2bc5190cb5a7c02a721772f87013da025915928d62c0fc9fdab43d4a60562bd" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.683713 4812 scope.go:117] "RemoveContainer" containerID="aeae09609cf7cde4b88009b5dbb50e9ea25376e5263c7ed4fed80dfd4ef74097" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.699772 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2f6kd"] Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.713931 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2f6kd"] Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.718104 4812 scope.go:117] "RemoveContainer" containerID="337e0ab0320cfabd0e0298a2854d48e439855f86cb0b54bc5e996ce03f4dd908" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.762562 4812 scope.go:117] "RemoveContainer" containerID="d2bc5190cb5a7c02a721772f87013da025915928d62c0fc9fdab43d4a60562bd" Feb 16 14:29:22 crc kubenswrapper[4812]: E0216 14:29:22.763030 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d2bc5190cb5a7c02a721772f87013da025915928d62c0fc9fdab43d4a60562bd\": container with ID starting with d2bc5190cb5a7c02a721772f87013da025915928d62c0fc9fdab43d4a60562bd not found: ID does not exist" containerID="d2bc5190cb5a7c02a721772f87013da025915928d62c0fc9fdab43d4a60562bd" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.763073 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2bc5190cb5a7c02a721772f87013da025915928d62c0fc9fdab43d4a60562bd"} err="failed to get container status \"d2bc5190cb5a7c02a721772f87013da025915928d62c0fc9fdab43d4a60562bd\": rpc error: code = NotFound desc = could not find container \"d2bc5190cb5a7c02a721772f87013da025915928d62c0fc9fdab43d4a60562bd\": container with ID starting with d2bc5190cb5a7c02a721772f87013da025915928d62c0fc9fdab43d4a60562bd not found: ID does not exist" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.763100 4812 scope.go:117] "RemoveContainer" containerID="aeae09609cf7cde4b88009b5dbb50e9ea25376e5263c7ed4fed80dfd4ef74097" Feb 16 14:29:22 crc kubenswrapper[4812]: E0216 14:29:22.763416 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeae09609cf7cde4b88009b5dbb50e9ea25376e5263c7ed4fed80dfd4ef74097\": container with ID starting with aeae09609cf7cde4b88009b5dbb50e9ea25376e5263c7ed4fed80dfd4ef74097 not found: ID does not exist" containerID="aeae09609cf7cde4b88009b5dbb50e9ea25376e5263c7ed4fed80dfd4ef74097" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.763476 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeae09609cf7cde4b88009b5dbb50e9ea25376e5263c7ed4fed80dfd4ef74097"} err="failed to get container status \"aeae09609cf7cde4b88009b5dbb50e9ea25376e5263c7ed4fed80dfd4ef74097\": rpc error: code = NotFound desc = could not find container \"aeae09609cf7cde4b88009b5dbb50e9ea25376e5263c7ed4fed80dfd4ef74097\": container with ID 
starting with aeae09609cf7cde4b88009b5dbb50e9ea25376e5263c7ed4fed80dfd4ef74097 not found: ID does not exist" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.763509 4812 scope.go:117] "RemoveContainer" containerID="337e0ab0320cfabd0e0298a2854d48e439855f86cb0b54bc5e996ce03f4dd908" Feb 16 14:29:22 crc kubenswrapper[4812]: E0216 14:29:22.764122 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"337e0ab0320cfabd0e0298a2854d48e439855f86cb0b54bc5e996ce03f4dd908\": container with ID starting with 337e0ab0320cfabd0e0298a2854d48e439855f86cb0b54bc5e996ce03f4dd908 not found: ID does not exist" containerID="337e0ab0320cfabd0e0298a2854d48e439855f86cb0b54bc5e996ce03f4dd908" Feb 16 14:29:22 crc kubenswrapper[4812]: I0216 14:29:22.764274 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"337e0ab0320cfabd0e0298a2854d48e439855f86cb0b54bc5e996ce03f4dd908"} err="failed to get container status \"337e0ab0320cfabd0e0298a2854d48e439855f86cb0b54bc5e996ce03f4dd908\": rpc error: code = NotFound desc = could not find container \"337e0ab0320cfabd0e0298a2854d48e439855f86cb0b54bc5e996ce03f4dd908\": container with ID starting with 337e0ab0320cfabd0e0298a2854d48e439855f86cb0b54bc5e996ce03f4dd908 not found: ID does not exist" Feb 16 14:29:23 crc kubenswrapper[4812]: I0216 14:29:23.903224 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fc3242c-4670-42e1-be02-12cdac84dd0d" path="/var/lib/kubelet/pods/6fc3242c-4670-42e1-be02-12cdac84dd0d/volumes" Feb 16 14:29:32 crc kubenswrapper[4812]: I0216 14:29:32.837187 4812 generic.go:334] "Generic (PLEG): container finished" podID="c8483932-ac35-4fb8-a807-b4d899788c4c" containerID="61aca6a9e911fd0c038d861322ff41280b628f18d736ba00688b0b88c3fc7e12" exitCode=0 Feb 16 14:29:32 crc kubenswrapper[4812]: I0216 14:29:32.837811 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-ndlqc/must-gather-drdhp" event={"ID":"c8483932-ac35-4fb8-a807-b4d899788c4c","Type":"ContainerDied","Data":"61aca6a9e911fd0c038d861322ff41280b628f18d736ba00688b0b88c3fc7e12"} Feb 16 14:29:32 crc kubenswrapper[4812]: I0216 14:29:32.839539 4812 scope.go:117] "RemoveContainer" containerID="61aca6a9e911fd0c038d861322ff41280b628f18d736ba00688b0b88c3fc7e12" Feb 16 14:29:32 crc kubenswrapper[4812]: I0216 14:29:32.840981 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bztp6"] Feb 16 14:29:32 crc kubenswrapper[4812]: E0216 14:29:32.843418 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fc3242c-4670-42e1-be02-12cdac84dd0d" containerName="extract-utilities" Feb 16 14:29:32 crc kubenswrapper[4812]: I0216 14:29:32.843483 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc3242c-4670-42e1-be02-12cdac84dd0d" containerName="extract-utilities" Feb 16 14:29:32 crc kubenswrapper[4812]: E0216 14:29:32.843525 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fc3242c-4670-42e1-be02-12cdac84dd0d" containerName="registry-server" Feb 16 14:29:32 crc kubenswrapper[4812]: I0216 14:29:32.843539 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc3242c-4670-42e1-be02-12cdac84dd0d" containerName="registry-server" Feb 16 14:29:32 crc kubenswrapper[4812]: E0216 14:29:32.843569 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fc3242c-4670-42e1-be02-12cdac84dd0d" containerName="extract-content" Feb 16 14:29:32 crc kubenswrapper[4812]: I0216 14:29:32.843583 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc3242c-4670-42e1-be02-12cdac84dd0d" containerName="extract-content" Feb 16 14:29:32 crc kubenswrapper[4812]: I0216 14:29:32.844097 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fc3242c-4670-42e1-be02-12cdac84dd0d" containerName="registry-server" Feb 16 14:29:32 crc kubenswrapper[4812]: I0216 
14:29:32.861753 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:32 crc kubenswrapper[4812]: I0216 14:29:32.870861 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bztp6"] Feb 16 14:29:32 crc kubenswrapper[4812]: I0216 14:29:32.997497 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbtb7\" (UniqueName: \"kubernetes.io/projected/d705acaf-d252-49c5-b340-3764d106ff60-kube-api-access-pbtb7\") pod \"community-operators-bztp6\" (UID: \"d705acaf-d252-49c5-b340-3764d106ff60\") " pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:32 crc kubenswrapper[4812]: I0216 14:29:32.997660 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d705acaf-d252-49c5-b340-3764d106ff60-utilities\") pod \"community-operators-bztp6\" (UID: \"d705acaf-d252-49c5-b340-3764d106ff60\") " pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:32 crc kubenswrapper[4812]: I0216 14:29:32.997937 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d705acaf-d252-49c5-b340-3764d106ff60-catalog-content\") pod \"community-operators-bztp6\" (UID: \"d705acaf-d252-49c5-b340-3764d106ff60\") " pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:33 crc kubenswrapper[4812]: I0216 14:29:33.041977 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ndlqc_must-gather-drdhp_c8483932-ac35-4fb8-a807-b4d899788c4c/gather/0.log" Feb 16 14:29:33 crc kubenswrapper[4812]: I0216 14:29:33.100756 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbtb7\" (UniqueName: 
\"kubernetes.io/projected/d705acaf-d252-49c5-b340-3764d106ff60-kube-api-access-pbtb7\") pod \"community-operators-bztp6\" (UID: \"d705acaf-d252-49c5-b340-3764d106ff60\") " pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:33 crc kubenswrapper[4812]: I0216 14:29:33.100854 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d705acaf-d252-49c5-b340-3764d106ff60-utilities\") pod \"community-operators-bztp6\" (UID: \"d705acaf-d252-49c5-b340-3764d106ff60\") " pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:33 crc kubenswrapper[4812]: I0216 14:29:33.100931 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d705acaf-d252-49c5-b340-3764d106ff60-catalog-content\") pod \"community-operators-bztp6\" (UID: \"d705acaf-d252-49c5-b340-3764d106ff60\") " pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:33 crc kubenswrapper[4812]: I0216 14:29:33.101550 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d705acaf-d252-49c5-b340-3764d106ff60-catalog-content\") pod \"community-operators-bztp6\" (UID: \"d705acaf-d252-49c5-b340-3764d106ff60\") " pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:33 crc kubenswrapper[4812]: I0216 14:29:33.102100 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d705acaf-d252-49c5-b340-3764d106ff60-utilities\") pod \"community-operators-bztp6\" (UID: \"d705acaf-d252-49c5-b340-3764d106ff60\") " pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:33 crc kubenswrapper[4812]: I0216 14:29:33.127363 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbtb7\" (UniqueName: 
\"kubernetes.io/projected/d705acaf-d252-49c5-b340-3764d106ff60-kube-api-access-pbtb7\") pod \"community-operators-bztp6\" (UID: \"d705acaf-d252-49c5-b340-3764d106ff60\") " pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:33 crc kubenswrapper[4812]: I0216 14:29:33.202218 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:33 crc kubenswrapper[4812]: I0216 14:29:33.821246 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bztp6"] Feb 16 14:29:33 crc kubenswrapper[4812]: I0216 14:29:33.860345 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bztp6" event={"ID":"d705acaf-d252-49c5-b340-3764d106ff60","Type":"ContainerStarted","Data":"adca8437cd1574218788ea3b28e24099564095eba0c2ecf5343b691b7492f6e9"} Feb 16 14:29:33 crc kubenswrapper[4812]: E0216 14:29:33.883177 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:29:34 crc kubenswrapper[4812]: I0216 14:29:34.871369 4812 generic.go:334] "Generic (PLEG): container finished" podID="d705acaf-d252-49c5-b340-3764d106ff60" containerID="9d4f59174ef549c162a60678fd219c3c1e6f98dc36e599bbc1f27a963302bd2e" exitCode=0 Feb 16 14:29:34 crc kubenswrapper[4812]: I0216 14:29:34.871660 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bztp6" event={"ID":"d705acaf-d252-49c5-b340-3764d106ff60","Type":"ContainerDied","Data":"9d4f59174ef549c162a60678fd219c3c1e6f98dc36e599bbc1f27a963302bd2e"} Feb 16 14:29:35 crc kubenswrapper[4812]: I0216 14:29:35.900236 4812 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-bztp6" event={"ID":"d705acaf-d252-49c5-b340-3764d106ff60","Type":"ContainerStarted","Data":"eaacc86821b1b34b308d943669cb82d862d7b184567f5340841f81191d54058d"} Feb 16 14:29:36 crc kubenswrapper[4812]: I0216 14:29:36.911995 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bztp6" event={"ID":"d705acaf-d252-49c5-b340-3764d106ff60","Type":"ContainerDied","Data":"eaacc86821b1b34b308d943669cb82d862d7b184567f5340841f81191d54058d"} Feb 16 14:29:36 crc kubenswrapper[4812]: I0216 14:29:36.911632 4812 generic.go:334] "Generic (PLEG): container finished" podID="d705acaf-d252-49c5-b340-3764d106ff60" containerID="eaacc86821b1b34b308d943669cb82d862d7b184567f5340841f81191d54058d" exitCode=0 Feb 16 14:29:37 crc kubenswrapper[4812]: I0216 14:29:37.933109 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bztp6" event={"ID":"d705acaf-d252-49c5-b340-3764d106ff60","Type":"ContainerStarted","Data":"6ab1dd8ada3e04451d2a5c1623cda46f4ca959cd703606dee564dcaf5f1b4002"} Feb 16 14:29:37 crc kubenswrapper[4812]: I0216 14:29:37.963804 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bztp6" podStartSLOduration=3.528782492 podStartE2EDuration="5.96376938s" podCreationTimestamp="2026-02-16 14:29:32 +0000 UTC" firstStartedPulling="2026-02-16 14:29:34.874543249 +0000 UTC m=+3463.938873940" lastFinishedPulling="2026-02-16 14:29:37.309530087 +0000 UTC m=+3466.373860828" observedRunningTime="2026-02-16 14:29:37.957182262 +0000 UTC m=+3467.021512973" watchObservedRunningTime="2026-02-16 14:29:37.96376938 +0000 UTC m=+3467.028100091" Feb 16 14:29:41 crc kubenswrapper[4812]: I0216 14:29:41.526396 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ndlqc/must-gather-drdhp"] Feb 16 14:29:41 crc kubenswrapper[4812]: I0216 14:29:41.528033 4812 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ndlqc/must-gather-drdhp" podUID="c8483932-ac35-4fb8-a807-b4d899788c4c" containerName="copy" containerID="cri-o://6ba1f54f8a3694595c2809208f07b8e593e95b58c43f72632689248e6e243852" gracePeriod=2 Feb 16 14:29:41 crc kubenswrapper[4812]: I0216 14:29:41.535848 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ndlqc/must-gather-drdhp"] Feb 16 14:29:41 crc kubenswrapper[4812]: I0216 14:29:41.980841 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ndlqc_must-gather-drdhp_c8483932-ac35-4fb8-a807-b4d899788c4c/copy/0.log" Feb 16 14:29:41 crc kubenswrapper[4812]: I0216 14:29:41.981555 4812 generic.go:334] "Generic (PLEG): container finished" podID="c8483932-ac35-4fb8-a807-b4d899788c4c" containerID="6ba1f54f8a3694595c2809208f07b8e593e95b58c43f72632689248e6e243852" exitCode=143 Feb 16 14:29:42 crc kubenswrapper[4812]: I0216 14:29:42.700927 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ndlqc_must-gather-drdhp_c8483932-ac35-4fb8-a807-b4d899788c4c/copy/0.log" Feb 16 14:29:42 crc kubenswrapper[4812]: I0216 14:29:42.701788 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ndlqc/must-gather-drdhp" Feb 16 14:29:42 crc kubenswrapper[4812]: I0216 14:29:42.870224 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c8483932-ac35-4fb8-a807-b4d899788c4c-must-gather-output\") pod \"c8483932-ac35-4fb8-a807-b4d899788c4c\" (UID: \"c8483932-ac35-4fb8-a807-b4d899788c4c\") " Feb 16 14:29:42 crc kubenswrapper[4812]: I0216 14:29:42.870342 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56f9c\" (UniqueName: \"kubernetes.io/projected/c8483932-ac35-4fb8-a807-b4d899788c4c-kube-api-access-56f9c\") pod \"c8483932-ac35-4fb8-a807-b4d899788c4c\" (UID: \"c8483932-ac35-4fb8-a807-b4d899788c4c\") " Feb 16 14:29:42 crc kubenswrapper[4812]: I0216 14:29:42.878655 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8483932-ac35-4fb8-a807-b4d899788c4c-kube-api-access-56f9c" (OuterVolumeSpecName: "kube-api-access-56f9c") pod "c8483932-ac35-4fb8-a807-b4d899788c4c" (UID: "c8483932-ac35-4fb8-a807-b4d899788c4c"). InnerVolumeSpecName "kube-api-access-56f9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:29:42 crc kubenswrapper[4812]: I0216 14:29:42.974587 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56f9c\" (UniqueName: \"kubernetes.io/projected/c8483932-ac35-4fb8-a807-b4d899788c4c-kube-api-access-56f9c\") on node \"crc\" DevicePath \"\"" Feb 16 14:29:42 crc kubenswrapper[4812]: I0216 14:29:42.993790 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ndlqc_must-gather-drdhp_c8483932-ac35-4fb8-a807-b4d899788c4c/copy/0.log" Feb 16 14:29:42 crc kubenswrapper[4812]: I0216 14:29:42.994140 4812 scope.go:117] "RemoveContainer" containerID="6ba1f54f8a3694595c2809208f07b8e593e95b58c43f72632689248e6e243852" Feb 16 14:29:42 crc kubenswrapper[4812]: I0216 14:29:42.994272 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ndlqc/must-gather-drdhp" Feb 16 14:29:43 crc kubenswrapper[4812]: I0216 14:29:43.019084 4812 scope.go:117] "RemoveContainer" containerID="61aca6a9e911fd0c038d861322ff41280b628f18d736ba00688b0b88c3fc7e12" Feb 16 14:29:43 crc kubenswrapper[4812]: I0216 14:29:43.024296 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8483932-ac35-4fb8-a807-b4d899788c4c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c8483932-ac35-4fb8-a807-b4d899788c4c" (UID: "c8483932-ac35-4fb8-a807-b4d899788c4c"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:29:43 crc kubenswrapper[4812]: I0216 14:29:43.077822 4812 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c8483932-ac35-4fb8-a807-b4d899788c4c-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 14:29:43 crc kubenswrapper[4812]: I0216 14:29:43.202644 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:43 crc kubenswrapper[4812]: I0216 14:29:43.204153 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:43 crc kubenswrapper[4812]: I0216 14:29:43.270292 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:43 crc kubenswrapper[4812]: I0216 14:29:43.899227 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8483932-ac35-4fb8-a807-b4d899788c4c" path="/var/lib/kubelet/pods/c8483932-ac35-4fb8-a807-b4d899788c4c/volumes" Feb 16 14:29:44 crc kubenswrapper[4812]: I0216 14:29:44.096931 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:44 crc kubenswrapper[4812]: I0216 14:29:44.157875 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bztp6"] Feb 16 14:29:44 crc kubenswrapper[4812]: I0216 14:29:44.549261 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:29:44 crc kubenswrapper[4812]: I0216 14:29:44.549717 4812 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:29:44 crc kubenswrapper[4812]: I0216 14:29:44.549785 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 14:29:44 crc kubenswrapper[4812]: I0216 14:29:44.550902 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5f58f6a974f79b6081c75e064f880be69c771923e683ab20b29c5f39942ca14"} pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 14:29:44 crc kubenswrapper[4812]: I0216 14:29:44.550998 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" containerID="cri-o://c5f58f6a974f79b6081c75e064f880be69c771923e683ab20b29c5f39942ca14" gracePeriod=600 Feb 16 14:29:45 crc kubenswrapper[4812]: I0216 14:29:45.119393 4812 generic.go:334] "Generic (PLEG): container finished" podID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerID="c5f58f6a974f79b6081c75e064f880be69c771923e683ab20b29c5f39942ca14" exitCode=0 Feb 16 14:29:45 crc kubenswrapper[4812]: I0216 14:29:45.120595 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerDied","Data":"c5f58f6a974f79b6081c75e064f880be69c771923e683ab20b29c5f39942ca14"} Feb 16 14:29:45 crc kubenswrapper[4812]: I0216 14:29:45.120621 4812 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerStarted","Data":"269fbf67f93ebc3710ed9092cae6b928acc7e4620f9ea69535e0ad57766470d5"} Feb 16 14:29:45 crc kubenswrapper[4812]: I0216 14:29:45.120638 4812 scope.go:117] "RemoveContainer" containerID="4d303775d5b0eacf35f8cd038a2b3e13d57cfb093693244a2a705750574fe85d" Feb 16 14:29:46 crc kubenswrapper[4812]: I0216 14:29:46.139031 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bztp6" podUID="d705acaf-d252-49c5-b340-3764d106ff60" containerName="registry-server" containerID="cri-o://6ab1dd8ada3e04451d2a5c1623cda46f4ca959cd703606dee564dcaf5f1b4002" gracePeriod=2 Feb 16 14:29:46 crc kubenswrapper[4812]: I0216 14:29:46.728475 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:46 crc kubenswrapper[4812]: I0216 14:29:46.768412 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d705acaf-d252-49c5-b340-3764d106ff60-utilities\") pod \"d705acaf-d252-49c5-b340-3764d106ff60\" (UID: \"d705acaf-d252-49c5-b340-3764d106ff60\") " Feb 16 14:29:46 crc kubenswrapper[4812]: I0216 14:29:46.768612 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d705acaf-d252-49c5-b340-3764d106ff60-catalog-content\") pod \"d705acaf-d252-49c5-b340-3764d106ff60\" (UID: \"d705acaf-d252-49c5-b340-3764d106ff60\") " Feb 16 14:29:46 crc kubenswrapper[4812]: I0216 14:29:46.768803 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbtb7\" (UniqueName: \"kubernetes.io/projected/d705acaf-d252-49c5-b340-3764d106ff60-kube-api-access-pbtb7\") pod 
\"d705acaf-d252-49c5-b340-3764d106ff60\" (UID: \"d705acaf-d252-49c5-b340-3764d106ff60\") " Feb 16 14:29:46 crc kubenswrapper[4812]: I0216 14:29:46.769849 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d705acaf-d252-49c5-b340-3764d106ff60-utilities" (OuterVolumeSpecName: "utilities") pod "d705acaf-d252-49c5-b340-3764d106ff60" (UID: "d705acaf-d252-49c5-b340-3764d106ff60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:29:46 crc kubenswrapper[4812]: I0216 14:29:46.775289 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d705acaf-d252-49c5-b340-3764d106ff60-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:29:46 crc kubenswrapper[4812]: I0216 14:29:46.775474 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d705acaf-d252-49c5-b340-3764d106ff60-kube-api-access-pbtb7" (OuterVolumeSpecName: "kube-api-access-pbtb7") pod "d705acaf-d252-49c5-b340-3764d106ff60" (UID: "d705acaf-d252-49c5-b340-3764d106ff60"). InnerVolumeSpecName "kube-api-access-pbtb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:29:46 crc kubenswrapper[4812]: I0216 14:29:46.841389 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d705acaf-d252-49c5-b340-3764d106ff60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d705acaf-d252-49c5-b340-3764d106ff60" (UID: "d705acaf-d252-49c5-b340-3764d106ff60"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:29:46 crc kubenswrapper[4812]: I0216 14:29:46.876772 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d705acaf-d252-49c5-b340-3764d106ff60-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:29:46 crc kubenswrapper[4812]: I0216 14:29:46.876885 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbtb7\" (UniqueName: \"kubernetes.io/projected/d705acaf-d252-49c5-b340-3764d106ff60-kube-api-access-pbtb7\") on node \"crc\" DevicePath \"\"" Feb 16 14:29:47 crc kubenswrapper[4812]: I0216 14:29:47.155529 4812 generic.go:334] "Generic (PLEG): container finished" podID="d705acaf-d252-49c5-b340-3764d106ff60" containerID="6ab1dd8ada3e04451d2a5c1623cda46f4ca959cd703606dee564dcaf5f1b4002" exitCode=0 Feb 16 14:29:47 crc kubenswrapper[4812]: I0216 14:29:47.155650 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bztp6" event={"ID":"d705acaf-d252-49c5-b340-3764d106ff60","Type":"ContainerDied","Data":"6ab1dd8ada3e04451d2a5c1623cda46f4ca959cd703606dee564dcaf5f1b4002"} Feb 16 14:29:47 crc kubenswrapper[4812]: I0216 14:29:47.156770 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bztp6" event={"ID":"d705acaf-d252-49c5-b340-3764d106ff60","Type":"ContainerDied","Data":"adca8437cd1574218788ea3b28e24099564095eba0c2ecf5343b691b7492f6e9"} Feb 16 14:29:47 crc kubenswrapper[4812]: I0216 14:29:47.155673 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bztp6" Feb 16 14:29:47 crc kubenswrapper[4812]: I0216 14:29:47.156852 4812 scope.go:117] "RemoveContainer" containerID="6ab1dd8ada3e04451d2a5c1623cda46f4ca959cd703606dee564dcaf5f1b4002" Feb 16 14:29:47 crc kubenswrapper[4812]: I0216 14:29:47.204550 4812 scope.go:117] "RemoveContainer" containerID="eaacc86821b1b34b308d943669cb82d862d7b184567f5340841f81191d54058d" Feb 16 14:29:47 crc kubenswrapper[4812]: I0216 14:29:47.206266 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bztp6"] Feb 16 14:29:47 crc kubenswrapper[4812]: I0216 14:29:47.221010 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bztp6"] Feb 16 14:29:47 crc kubenswrapper[4812]: I0216 14:29:47.231477 4812 scope.go:117] "RemoveContainer" containerID="9d4f59174ef549c162a60678fd219c3c1e6f98dc36e599bbc1f27a963302bd2e" Feb 16 14:29:47 crc kubenswrapper[4812]: I0216 14:29:47.291040 4812 scope.go:117] "RemoveContainer" containerID="6ab1dd8ada3e04451d2a5c1623cda46f4ca959cd703606dee564dcaf5f1b4002" Feb 16 14:29:47 crc kubenswrapper[4812]: E0216 14:29:47.292319 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ab1dd8ada3e04451d2a5c1623cda46f4ca959cd703606dee564dcaf5f1b4002\": container with ID starting with 6ab1dd8ada3e04451d2a5c1623cda46f4ca959cd703606dee564dcaf5f1b4002 not found: ID does not exist" containerID="6ab1dd8ada3e04451d2a5c1623cda46f4ca959cd703606dee564dcaf5f1b4002" Feb 16 14:29:47 crc kubenswrapper[4812]: I0216 14:29:47.292512 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ab1dd8ada3e04451d2a5c1623cda46f4ca959cd703606dee564dcaf5f1b4002"} err="failed to get container status \"6ab1dd8ada3e04451d2a5c1623cda46f4ca959cd703606dee564dcaf5f1b4002\": rpc error: code = NotFound desc = could not find 
container \"6ab1dd8ada3e04451d2a5c1623cda46f4ca959cd703606dee564dcaf5f1b4002\": container with ID starting with 6ab1dd8ada3e04451d2a5c1623cda46f4ca959cd703606dee564dcaf5f1b4002 not found: ID does not exist" Feb 16 14:29:47 crc kubenswrapper[4812]: I0216 14:29:47.292676 4812 scope.go:117] "RemoveContainer" containerID="eaacc86821b1b34b308d943669cb82d862d7b184567f5340841f81191d54058d" Feb 16 14:29:47 crc kubenswrapper[4812]: E0216 14:29:47.293165 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaacc86821b1b34b308d943669cb82d862d7b184567f5340841f81191d54058d\": container with ID starting with eaacc86821b1b34b308d943669cb82d862d7b184567f5340841f81191d54058d not found: ID does not exist" containerID="eaacc86821b1b34b308d943669cb82d862d7b184567f5340841f81191d54058d" Feb 16 14:29:47 crc kubenswrapper[4812]: I0216 14:29:47.293200 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaacc86821b1b34b308d943669cb82d862d7b184567f5340841f81191d54058d"} err="failed to get container status \"eaacc86821b1b34b308d943669cb82d862d7b184567f5340841f81191d54058d\": rpc error: code = NotFound desc = could not find container \"eaacc86821b1b34b308d943669cb82d862d7b184567f5340841f81191d54058d\": container with ID starting with eaacc86821b1b34b308d943669cb82d862d7b184567f5340841f81191d54058d not found: ID does not exist" Feb 16 14:29:47 crc kubenswrapper[4812]: I0216 14:29:47.293221 4812 scope.go:117] "RemoveContainer" containerID="9d4f59174ef549c162a60678fd219c3c1e6f98dc36e599bbc1f27a963302bd2e" Feb 16 14:29:47 crc kubenswrapper[4812]: E0216 14:29:47.293553 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d4f59174ef549c162a60678fd219c3c1e6f98dc36e599bbc1f27a963302bd2e\": container with ID starting with 9d4f59174ef549c162a60678fd219c3c1e6f98dc36e599bbc1f27a963302bd2e not found: ID does 
not exist" containerID="9d4f59174ef549c162a60678fd219c3c1e6f98dc36e599bbc1f27a963302bd2e" Feb 16 14:29:47 crc kubenswrapper[4812]: I0216 14:29:47.293589 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d4f59174ef549c162a60678fd219c3c1e6f98dc36e599bbc1f27a963302bd2e"} err="failed to get container status \"9d4f59174ef549c162a60678fd219c3c1e6f98dc36e599bbc1f27a963302bd2e\": rpc error: code = NotFound desc = could not find container \"9d4f59174ef549c162a60678fd219c3c1e6f98dc36e599bbc1f27a963302bd2e\": container with ID starting with 9d4f59174ef549c162a60678fd219c3c1e6f98dc36e599bbc1f27a963302bd2e not found: ID does not exist" Feb 16 14:29:47 crc kubenswrapper[4812]: I0216 14:29:47.893423 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d705acaf-d252-49c5-b340-3764d106ff60" path="/var/lib/kubelet/pods/d705acaf-d252-49c5-b340-3764d106ff60/volumes" Feb 16 14:29:48 crc kubenswrapper[4812]: E0216 14:29:48.880243 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:29:59 crc kubenswrapper[4812]: E0216 14:29:59.882047 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.169618 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx"] Feb 16 14:30:00 crc kubenswrapper[4812]: E0216 14:30:00.170616 4812 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8483932-ac35-4fb8-a807-b4d899788c4c" containerName="gather" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.170663 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8483932-ac35-4fb8-a807-b4d899788c4c" containerName="gather" Feb 16 14:30:00 crc kubenswrapper[4812]: E0216 14:30:00.170705 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d705acaf-d252-49c5-b340-3764d106ff60" containerName="extract-content" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.170723 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d705acaf-d252-49c5-b340-3764d106ff60" containerName="extract-content" Feb 16 14:30:00 crc kubenswrapper[4812]: E0216 14:30:00.170798 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8483932-ac35-4fb8-a807-b4d899788c4c" containerName="copy" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.170816 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8483932-ac35-4fb8-a807-b4d899788c4c" containerName="copy" Feb 16 14:30:00 crc kubenswrapper[4812]: E0216 14:30:00.170852 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d705acaf-d252-49c5-b340-3764d106ff60" containerName="registry-server" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.170868 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d705acaf-d252-49c5-b340-3764d106ff60" containerName="registry-server" Feb 16 14:30:00 crc kubenswrapper[4812]: E0216 14:30:00.170921 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d705acaf-d252-49c5-b340-3764d106ff60" containerName="extract-utilities" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.170940 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d705acaf-d252-49c5-b340-3764d106ff60" containerName="extract-utilities" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.171488 4812 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="c8483932-ac35-4fb8-a807-b4d899788c4c" containerName="gather" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.171534 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d705acaf-d252-49c5-b340-3764d106ff60" containerName="registry-server" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.171569 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8483932-ac35-4fb8-a807-b4d899788c4c" containerName="copy" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.173270 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.178972 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx"] Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.179835 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.179854 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.305706 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaa8cb8d-958b-4659-9680-aa4c238beaea-secret-volume\") pod \"collect-profiles-29520870-9rdgx\" (UID: \"eaa8cb8d-958b-4659-9680-aa4c238beaea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.305834 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaa8cb8d-958b-4659-9680-aa4c238beaea-config-volume\") pod 
\"collect-profiles-29520870-9rdgx\" (UID: \"eaa8cb8d-958b-4659-9680-aa4c238beaea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.305934 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwpvx\" (UniqueName: \"kubernetes.io/projected/eaa8cb8d-958b-4659-9680-aa4c238beaea-kube-api-access-xwpvx\") pod \"collect-profiles-29520870-9rdgx\" (UID: \"eaa8cb8d-958b-4659-9680-aa4c238beaea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.407320 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwpvx\" (UniqueName: \"kubernetes.io/projected/eaa8cb8d-958b-4659-9680-aa4c238beaea-kube-api-access-xwpvx\") pod \"collect-profiles-29520870-9rdgx\" (UID: \"eaa8cb8d-958b-4659-9680-aa4c238beaea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.407545 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaa8cb8d-958b-4659-9680-aa4c238beaea-secret-volume\") pod \"collect-profiles-29520870-9rdgx\" (UID: \"eaa8cb8d-958b-4659-9680-aa4c238beaea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.407615 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaa8cb8d-958b-4659-9680-aa4c238beaea-config-volume\") pod \"collect-profiles-29520870-9rdgx\" (UID: \"eaa8cb8d-958b-4659-9680-aa4c238beaea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.408736 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaa8cb8d-958b-4659-9680-aa4c238beaea-config-volume\") pod \"collect-profiles-29520870-9rdgx\" (UID: \"eaa8cb8d-958b-4659-9680-aa4c238beaea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.413424 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaa8cb8d-958b-4659-9680-aa4c238beaea-secret-volume\") pod \"collect-profiles-29520870-9rdgx\" (UID: \"eaa8cb8d-958b-4659-9680-aa4c238beaea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.426048 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwpvx\" (UniqueName: \"kubernetes.io/projected/eaa8cb8d-958b-4659-9680-aa4c238beaea-kube-api-access-xwpvx\") pod \"collect-profiles-29520870-9rdgx\" (UID: \"eaa8cb8d-958b-4659-9680-aa4c238beaea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" Feb 16 14:30:00 crc kubenswrapper[4812]: I0216 14:30:00.520241 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" Feb 16 14:30:01 crc kubenswrapper[4812]: I0216 14:30:01.018683 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx"] Feb 16 14:30:01 crc kubenswrapper[4812]: I0216 14:30:01.310914 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" event={"ID":"eaa8cb8d-958b-4659-9680-aa4c238beaea","Type":"ContainerStarted","Data":"c93014fe0bc47537c78bd6c0dee5047e9f146aa3420bfe05675064f97a11f562"} Feb 16 14:30:01 crc kubenswrapper[4812]: I0216 14:30:01.311279 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" event={"ID":"eaa8cb8d-958b-4659-9680-aa4c238beaea","Type":"ContainerStarted","Data":"87987253dffcebf9ef0e0c180a9a2b1bb8daec9aad77f8619c027974b643f9a5"} Feb 16 14:30:01 crc kubenswrapper[4812]: I0216 14:30:01.324959 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" podStartSLOduration=1.324925788 podStartE2EDuration="1.324925788s" podCreationTimestamp="2026-02-16 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:30:01.322131218 +0000 UTC m=+3490.386461939" watchObservedRunningTime="2026-02-16 14:30:01.324925788 +0000 UTC m=+3490.389256489" Feb 16 14:30:02 crc kubenswrapper[4812]: I0216 14:30:02.321713 4812 generic.go:334] "Generic (PLEG): container finished" podID="eaa8cb8d-958b-4659-9680-aa4c238beaea" containerID="c93014fe0bc47537c78bd6c0dee5047e9f146aa3420bfe05675064f97a11f562" exitCode=0 Feb 16 14:30:02 crc kubenswrapper[4812]: I0216 14:30:02.321756 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" event={"ID":"eaa8cb8d-958b-4659-9680-aa4c238beaea","Type":"ContainerDied","Data":"c93014fe0bc47537c78bd6c0dee5047e9f146aa3420bfe05675064f97a11f562"} Feb 16 14:30:03 crc kubenswrapper[4812]: I0216 14:30:03.778212 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" Feb 16 14:30:03 crc kubenswrapper[4812]: I0216 14:30:03.899990 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaa8cb8d-958b-4659-9680-aa4c238beaea-config-volume\") pod \"eaa8cb8d-958b-4659-9680-aa4c238beaea\" (UID: \"eaa8cb8d-958b-4659-9680-aa4c238beaea\") " Feb 16 14:30:03 crc kubenswrapper[4812]: I0216 14:30:03.900564 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaa8cb8d-958b-4659-9680-aa4c238beaea-secret-volume\") pod \"eaa8cb8d-958b-4659-9680-aa4c238beaea\" (UID: \"eaa8cb8d-958b-4659-9680-aa4c238beaea\") " Feb 16 14:30:03 crc kubenswrapper[4812]: I0216 14:30:03.901114 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa8cb8d-958b-4659-9680-aa4c238beaea-config-volume" (OuterVolumeSpecName: "config-volume") pod "eaa8cb8d-958b-4659-9680-aa4c238beaea" (UID: "eaa8cb8d-958b-4659-9680-aa4c238beaea"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:30:03 crc kubenswrapper[4812]: I0216 14:30:03.901329 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwpvx\" (UniqueName: \"kubernetes.io/projected/eaa8cb8d-958b-4659-9680-aa4c238beaea-kube-api-access-xwpvx\") pod \"eaa8cb8d-958b-4659-9680-aa4c238beaea\" (UID: \"eaa8cb8d-958b-4659-9680-aa4c238beaea\") " Feb 16 14:30:03 crc kubenswrapper[4812]: I0216 14:30:03.901884 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaa8cb8d-958b-4659-9680-aa4c238beaea-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:30:03 crc kubenswrapper[4812]: I0216 14:30:03.909739 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa8cb8d-958b-4659-9680-aa4c238beaea-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eaa8cb8d-958b-4659-9680-aa4c238beaea" (UID: "eaa8cb8d-958b-4659-9680-aa4c238beaea"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:30:03 crc kubenswrapper[4812]: I0216 14:30:03.909883 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa8cb8d-958b-4659-9680-aa4c238beaea-kube-api-access-xwpvx" (OuterVolumeSpecName: "kube-api-access-xwpvx") pod "eaa8cb8d-958b-4659-9680-aa4c238beaea" (UID: "eaa8cb8d-958b-4659-9680-aa4c238beaea"). InnerVolumeSpecName "kube-api-access-xwpvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:30:04 crc kubenswrapper[4812]: I0216 14:30:04.003801 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eaa8cb8d-958b-4659-9680-aa4c238beaea-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:30:04 crc kubenswrapper[4812]: I0216 14:30:04.003854 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwpvx\" (UniqueName: \"kubernetes.io/projected/eaa8cb8d-958b-4659-9680-aa4c238beaea-kube-api-access-xwpvx\") on node \"crc\" DevicePath \"\"" Feb 16 14:30:04 crc kubenswrapper[4812]: I0216 14:30:04.348256 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" event={"ID":"eaa8cb8d-958b-4659-9680-aa4c238beaea","Type":"ContainerDied","Data":"87987253dffcebf9ef0e0c180a9a2b1bb8daec9aad77f8619c027974b643f9a5"} Feb 16 14:30:04 crc kubenswrapper[4812]: I0216 14:30:04.348313 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87987253dffcebf9ef0e0c180a9a2b1bb8daec9aad77f8619c027974b643f9a5" Feb 16 14:30:04 crc kubenswrapper[4812]: I0216 14:30:04.348331 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520870-9rdgx" Feb 16 14:30:04 crc kubenswrapper[4812]: I0216 14:30:04.434334 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9"] Feb 16 14:30:04 crc kubenswrapper[4812]: I0216 14:30:04.444367 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520825-57tb9"] Feb 16 14:30:05 crc kubenswrapper[4812]: I0216 14:30:05.897160 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec93766a-8778-44ae-a75d-b348dbb218e5" path="/var/lib/kubelet/pods/ec93766a-8778-44ae-a75d-b348dbb218e5/volumes" Feb 16 14:30:09 crc kubenswrapper[4812]: I0216 14:30:09.636588 4812 scope.go:117] "RemoveContainer" containerID="b53207774a057ac0da7d2b36f4cd961b5843e767ee21da0f23d774e3f456b592" Feb 16 14:30:14 crc kubenswrapper[4812]: E0216 14:30:14.882741 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:30:25 crc kubenswrapper[4812]: E0216 14:30:25.946634 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:30:40 crc kubenswrapper[4812]: I0216 14:30:40.881316 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 14:30:41 crc kubenswrapper[4812]: E0216 14:30:41.008950 4812 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 14:30:41 crc kubenswrapper[4812]: E0216 14:30:41.009007 4812 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 14:30:41 crc kubenswrapper[4812]: E0216 14:30:41.009161 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq54s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-krnzs_openstack(a7d4eae6-781f-4675-a6c3-ee0f1589c735): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 14:30:41 crc kubenswrapper[4812]: E0216 14:30:41.010388 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:30:52 crc kubenswrapper[4812]: E0216 14:30:52.882555 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:31:03 crc kubenswrapper[4812]: E0216 14:31:03.882097 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:31:16 crc kubenswrapper[4812]: E0216 14:31:16.885389 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:31:27 crc kubenswrapper[4812]: E0216 14:31:27.881844 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:31:39 crc kubenswrapper[4812]: E0216 14:31:39.884819 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:31:44 crc kubenswrapper[4812]: I0216 14:31:44.549337 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:31:44 crc kubenswrapper[4812]: I0216 14:31:44.549894 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:31:51 crc kubenswrapper[4812]: E0216 14:31:51.893982 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:31:55 crc kubenswrapper[4812]: I0216 14:31:55.862902 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-shl52"] Feb 16 14:31:55 crc kubenswrapper[4812]: E0216 14:31:55.865263 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa8cb8d-958b-4659-9680-aa4c238beaea" containerName="collect-profiles" Feb 16 14:31:55 crc kubenswrapper[4812]: I0216 14:31:55.865373 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa8cb8d-958b-4659-9680-aa4c238beaea" containerName="collect-profiles" Feb 16 14:31:55 crc kubenswrapper[4812]: I0216 14:31:55.865736 4812 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="eaa8cb8d-958b-4659-9680-aa4c238beaea" containerName="collect-profiles" Feb 16 14:31:55 crc kubenswrapper[4812]: I0216 14:31:55.868283 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:31:55 crc kubenswrapper[4812]: I0216 14:31:55.903802 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-shl52"] Feb 16 14:31:55 crc kubenswrapper[4812]: I0216 14:31:55.930868 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-catalog-content\") pod \"redhat-operators-shl52\" (UID: \"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714\") " pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:31:55 crc kubenswrapper[4812]: I0216 14:31:55.930922 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-utilities\") pod \"redhat-operators-shl52\" (UID: \"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714\") " pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:31:55 crc kubenswrapper[4812]: I0216 14:31:55.931074 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsvbx\" (UniqueName: \"kubernetes.io/projected/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-kube-api-access-dsvbx\") pod \"redhat-operators-shl52\" (UID: \"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714\") " pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:31:56 crc kubenswrapper[4812]: I0216 14:31:56.032495 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-catalog-content\") pod \"redhat-operators-shl52\" (UID: 
\"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714\") " pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:31:56 crc kubenswrapper[4812]: I0216 14:31:56.032539 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-utilities\") pod \"redhat-operators-shl52\" (UID: \"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714\") " pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:31:56 crc kubenswrapper[4812]: I0216 14:31:56.032665 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsvbx\" (UniqueName: \"kubernetes.io/projected/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-kube-api-access-dsvbx\") pod \"redhat-operators-shl52\" (UID: \"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714\") " pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:31:56 crc kubenswrapper[4812]: I0216 14:31:56.033122 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-catalog-content\") pod \"redhat-operators-shl52\" (UID: \"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714\") " pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:31:56 crc kubenswrapper[4812]: I0216 14:31:56.033395 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-utilities\") pod \"redhat-operators-shl52\" (UID: \"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714\") " pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:31:56 crc kubenswrapper[4812]: I0216 14:31:56.066770 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsvbx\" (UniqueName: \"kubernetes.io/projected/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-kube-api-access-dsvbx\") pod \"redhat-operators-shl52\" (UID: \"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714\") " 
pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:31:56 crc kubenswrapper[4812]: I0216 14:31:56.206415 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:31:56 crc kubenswrapper[4812]: I0216 14:31:56.686753 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-shl52"] Feb 16 14:31:57 crc kubenswrapper[4812]: I0216 14:31:57.138901 4812 generic.go:334] "Generic (PLEG): container finished" podID="44eeb9b0-40ec-4fa4-8dbe-d7444e3be714" containerID="afa0c3befbea655a4a44d5871a9830f9452bfcf5e80e6b0f7435146a4ce708e2" exitCode=0 Feb 16 14:31:57 crc kubenswrapper[4812]: I0216 14:31:57.138948 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shl52" event={"ID":"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714","Type":"ContainerDied","Data":"afa0c3befbea655a4a44d5871a9830f9452bfcf5e80e6b0f7435146a4ce708e2"} Feb 16 14:31:57 crc kubenswrapper[4812]: I0216 14:31:57.138975 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shl52" event={"ID":"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714","Type":"ContainerStarted","Data":"7fcc8d4d1481ee482f86f1a0340caec898b1b9a593e8bffa5c672ba47c029e29"} Feb 16 14:31:58 crc kubenswrapper[4812]: I0216 14:31:58.148929 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shl52" event={"ID":"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714","Type":"ContainerStarted","Data":"b612b71bad8ce2ee57a2d408766323a3a60f430de8ccfda4c03a0938a97c144e"} Feb 16 14:31:59 crc kubenswrapper[4812]: I0216 14:31:59.162203 4812 generic.go:334] "Generic (PLEG): container finished" podID="44eeb9b0-40ec-4fa4-8dbe-d7444e3be714" containerID="b612b71bad8ce2ee57a2d408766323a3a60f430de8ccfda4c03a0938a97c144e" exitCode=0 Feb 16 14:31:59 crc kubenswrapper[4812]: I0216 14:31:59.162250 4812 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-shl52" event={"ID":"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714","Type":"ContainerDied","Data":"b612b71bad8ce2ee57a2d408766323a3a60f430de8ccfda4c03a0938a97c144e"} Feb 16 14:32:00 crc kubenswrapper[4812]: I0216 14:32:00.173015 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shl52" event={"ID":"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714","Type":"ContainerStarted","Data":"5ccfcc964ca9a8183d52e822457e11f7cdfb71b7e9505370ab43cf394a950f2e"} Feb 16 14:32:00 crc kubenswrapper[4812]: I0216 14:32:00.201603 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-shl52" podStartSLOduration=2.7673142889999998 podStartE2EDuration="5.201581789s" podCreationTimestamp="2026-02-16 14:31:55 +0000 UTC" firstStartedPulling="2026-02-16 14:31:57.140475819 +0000 UTC m=+3606.204806510" lastFinishedPulling="2026-02-16 14:31:59.574743309 +0000 UTC m=+3608.639074010" observedRunningTime="2026-02-16 14:32:00.197197282 +0000 UTC m=+3609.261527983" watchObservedRunningTime="2026-02-16 14:32:00.201581789 +0000 UTC m=+3609.265912490" Feb 16 14:32:04 crc kubenswrapper[4812]: E0216 14:32:04.882710 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:32:06 crc kubenswrapper[4812]: I0216 14:32:06.207577 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:32:06 crc kubenswrapper[4812]: I0216 14:32:06.207634 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:32:07 crc kubenswrapper[4812]: I0216 
14:32:07.274416 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-shl52" podUID="44eeb9b0-40ec-4fa4-8dbe-d7444e3be714" containerName="registry-server" probeResult="failure" output=< Feb 16 14:32:07 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Feb 16 14:32:07 crc kubenswrapper[4812]: > Feb 16 14:32:14 crc kubenswrapper[4812]: I0216 14:32:14.548430 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:32:14 crc kubenswrapper[4812]: I0216 14:32:14.548933 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:32:16 crc kubenswrapper[4812]: I0216 14:32:16.294342 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:32:16 crc kubenswrapper[4812]: I0216 14:32:16.379165 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:32:16 crc kubenswrapper[4812]: I0216 14:32:16.552270 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-shl52"] Feb 16 14:32:17 crc kubenswrapper[4812]: I0216 14:32:17.377691 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-shl52" podUID="44eeb9b0-40ec-4fa4-8dbe-d7444e3be714" containerName="registry-server" 
containerID="cri-o://5ccfcc964ca9a8183d52e822457e11f7cdfb71b7e9505370ab43cf394a950f2e" gracePeriod=2 Feb 16 14:32:17 crc kubenswrapper[4812]: I0216 14:32:17.870191 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:32:17 crc kubenswrapper[4812]: I0216 14:32:17.978376 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-utilities\") pod \"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714\" (UID: \"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714\") " Feb 16 14:32:17 crc kubenswrapper[4812]: I0216 14:32:17.978540 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-catalog-content\") pod \"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714\" (UID: \"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714\") " Feb 16 14:32:17 crc kubenswrapper[4812]: I0216 14:32:17.978759 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsvbx\" (UniqueName: \"kubernetes.io/projected/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-kube-api-access-dsvbx\") pod \"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714\" (UID: \"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714\") " Feb 16 14:32:17 crc kubenswrapper[4812]: I0216 14:32:17.979570 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-utilities" (OuterVolumeSpecName: "utilities") pod "44eeb9b0-40ec-4fa4-8dbe-d7444e3be714" (UID: "44eeb9b0-40ec-4fa4-8dbe-d7444e3be714"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:32:17 crc kubenswrapper[4812]: I0216 14:32:17.987973 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-kube-api-access-dsvbx" (OuterVolumeSpecName: "kube-api-access-dsvbx") pod "44eeb9b0-40ec-4fa4-8dbe-d7444e3be714" (UID: "44eeb9b0-40ec-4fa4-8dbe-d7444e3be714"). InnerVolumeSpecName "kube-api-access-dsvbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.081164 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsvbx\" (UniqueName: \"kubernetes.io/projected/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-kube-api-access-dsvbx\") on node \"crc\" DevicePath \"\"" Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.081204 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.126542 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44eeb9b0-40ec-4fa4-8dbe-d7444e3be714" (UID: "44eeb9b0-40ec-4fa4-8dbe-d7444e3be714"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.183866 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.390344 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-shl52" Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.390344 4812 generic.go:334] "Generic (PLEG): container finished" podID="44eeb9b0-40ec-4fa4-8dbe-d7444e3be714" containerID="5ccfcc964ca9a8183d52e822457e11f7cdfb71b7e9505370ab43cf394a950f2e" exitCode=0 Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.390345 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shl52" event={"ID":"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714","Type":"ContainerDied","Data":"5ccfcc964ca9a8183d52e822457e11f7cdfb71b7e9505370ab43cf394a950f2e"} Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.390537 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shl52" event={"ID":"44eeb9b0-40ec-4fa4-8dbe-d7444e3be714","Type":"ContainerDied","Data":"7fcc8d4d1481ee482f86f1a0340caec898b1b9a593e8bffa5c672ba47c029e29"} Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.390565 4812 scope.go:117] "RemoveContainer" containerID="5ccfcc964ca9a8183d52e822457e11f7cdfb71b7e9505370ab43cf394a950f2e" Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.412914 4812 scope.go:117] "RemoveContainer" containerID="b612b71bad8ce2ee57a2d408766323a3a60f430de8ccfda4c03a0938a97c144e" Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.460277 4812 scope.go:117] "RemoveContainer" containerID="afa0c3befbea655a4a44d5871a9830f9452bfcf5e80e6b0f7435146a4ce708e2" Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.472622 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-shl52"] Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.488065 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-shl52"] Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.501629 4812 scope.go:117] "RemoveContainer" 
containerID="5ccfcc964ca9a8183d52e822457e11f7cdfb71b7e9505370ab43cf394a950f2e" Feb 16 14:32:18 crc kubenswrapper[4812]: E0216 14:32:18.502319 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ccfcc964ca9a8183d52e822457e11f7cdfb71b7e9505370ab43cf394a950f2e\": container with ID starting with 5ccfcc964ca9a8183d52e822457e11f7cdfb71b7e9505370ab43cf394a950f2e not found: ID does not exist" containerID="5ccfcc964ca9a8183d52e822457e11f7cdfb71b7e9505370ab43cf394a950f2e" Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.502388 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ccfcc964ca9a8183d52e822457e11f7cdfb71b7e9505370ab43cf394a950f2e"} err="failed to get container status \"5ccfcc964ca9a8183d52e822457e11f7cdfb71b7e9505370ab43cf394a950f2e\": rpc error: code = NotFound desc = could not find container \"5ccfcc964ca9a8183d52e822457e11f7cdfb71b7e9505370ab43cf394a950f2e\": container with ID starting with 5ccfcc964ca9a8183d52e822457e11f7cdfb71b7e9505370ab43cf394a950f2e not found: ID does not exist" Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.502421 4812 scope.go:117] "RemoveContainer" containerID="b612b71bad8ce2ee57a2d408766323a3a60f430de8ccfda4c03a0938a97c144e" Feb 16 14:32:18 crc kubenswrapper[4812]: E0216 14:32:18.502929 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b612b71bad8ce2ee57a2d408766323a3a60f430de8ccfda4c03a0938a97c144e\": container with ID starting with b612b71bad8ce2ee57a2d408766323a3a60f430de8ccfda4c03a0938a97c144e not found: ID does not exist" containerID="b612b71bad8ce2ee57a2d408766323a3a60f430de8ccfda4c03a0938a97c144e" Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.502980 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b612b71bad8ce2ee57a2d408766323a3a60f430de8ccfda4c03a0938a97c144e"} err="failed to get container status \"b612b71bad8ce2ee57a2d408766323a3a60f430de8ccfda4c03a0938a97c144e\": rpc error: code = NotFound desc = could not find container \"b612b71bad8ce2ee57a2d408766323a3a60f430de8ccfda4c03a0938a97c144e\": container with ID starting with b612b71bad8ce2ee57a2d408766323a3a60f430de8ccfda4c03a0938a97c144e not found: ID does not exist" Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.503016 4812 scope.go:117] "RemoveContainer" containerID="afa0c3befbea655a4a44d5871a9830f9452bfcf5e80e6b0f7435146a4ce708e2" Feb 16 14:32:18 crc kubenswrapper[4812]: E0216 14:32:18.503657 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afa0c3befbea655a4a44d5871a9830f9452bfcf5e80e6b0f7435146a4ce708e2\": container with ID starting with afa0c3befbea655a4a44d5871a9830f9452bfcf5e80e6b0f7435146a4ce708e2 not found: ID does not exist" containerID="afa0c3befbea655a4a44d5871a9830f9452bfcf5e80e6b0f7435146a4ce708e2" Feb 16 14:32:18 crc kubenswrapper[4812]: I0216 14:32:18.503699 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afa0c3befbea655a4a44d5871a9830f9452bfcf5e80e6b0f7435146a4ce708e2"} err="failed to get container status \"afa0c3befbea655a4a44d5871a9830f9452bfcf5e80e6b0f7435146a4ce708e2\": rpc error: code = NotFound desc = could not find container \"afa0c3befbea655a4a44d5871a9830f9452bfcf5e80e6b0f7435146a4ce708e2\": container with ID starting with afa0c3befbea655a4a44d5871a9830f9452bfcf5e80e6b0f7435146a4ce708e2 not found: ID does not exist" Feb 16 14:32:18 crc kubenswrapper[4812]: E0216 14:32:18.881558 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:32:19 crc kubenswrapper[4812]: I0216 14:32:19.894603 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44eeb9b0-40ec-4fa4-8dbe-d7444e3be714" path="/var/lib/kubelet/pods/44eeb9b0-40ec-4fa4-8dbe-d7444e3be714/volumes" Feb 16 14:32:33 crc kubenswrapper[4812]: E0216 14:32:33.882306 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:32:44 crc kubenswrapper[4812]: I0216 14:32:44.552009 4812 patch_prober.go:28] interesting pod/machine-config-daemon-c6mn9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:32:44 crc kubenswrapper[4812]: I0216 14:32:44.552569 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:32:44 crc kubenswrapper[4812]: I0216 14:32:44.552611 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" Feb 16 14:32:44 crc kubenswrapper[4812]: I0216 14:32:44.553339 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"269fbf67f93ebc3710ed9092cae6b928acc7e4620f9ea69535e0ad57766470d5"} pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 14:32:44 crc kubenswrapper[4812]: I0216 14:32:44.553388 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerName="machine-config-daemon" containerID="cri-o://269fbf67f93ebc3710ed9092cae6b928acc7e4620f9ea69535e0ad57766470d5" gracePeriod=600 Feb 16 14:32:44 crc kubenswrapper[4812]: I0216 14:32:44.703010 4812 generic.go:334] "Generic (PLEG): container finished" podID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" containerID="269fbf67f93ebc3710ed9092cae6b928acc7e4620f9ea69535e0ad57766470d5" exitCode=0 Feb 16 14:32:44 crc kubenswrapper[4812]: I0216 14:32:44.703097 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" event={"ID":"3c55e49a-a30d-4950-a690-c33d9f8a31e0","Type":"ContainerDied","Data":"269fbf67f93ebc3710ed9092cae6b928acc7e4620f9ea69535e0ad57766470d5"} Feb 16 14:32:44 crc kubenswrapper[4812]: I0216 14:32:44.703200 4812 scope.go:117] "RemoveContainer" containerID="c5f58f6a974f79b6081c75e064f880be69c771923e683ab20b29c5f39942ca14" Feb 16 14:32:44 crc kubenswrapper[4812]: E0216 14:32:44.706650 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:32:45 crc kubenswrapper[4812]: I0216 14:32:45.719403 4812 
scope.go:117] "RemoveContainer" containerID="269fbf67f93ebc3710ed9092cae6b928acc7e4620f9ea69535e0ad57766470d5" Feb 16 14:32:45 crc kubenswrapper[4812]: E0216 14:32:45.720306 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:32:45 crc kubenswrapper[4812]: E0216 14:32:45.880931 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:32:56 crc kubenswrapper[4812]: E0216 14:32:56.880733 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:32:59 crc kubenswrapper[4812]: I0216 14:32:59.879972 4812 scope.go:117] "RemoveContainer" containerID="269fbf67f93ebc3710ed9092cae6b928acc7e4620f9ea69535e0ad57766470d5" Feb 16 14:32:59 crc kubenswrapper[4812]: E0216 14:32:59.881039 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:33:10 crc kubenswrapper[4812]: E0216 14:33:10.882566 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:33:11 crc kubenswrapper[4812]: I0216 14:33:11.892123 4812 scope.go:117] "RemoveContainer" containerID="269fbf67f93ebc3710ed9092cae6b928acc7e4620f9ea69535e0ad57766470d5" Feb 16 14:33:11 crc kubenswrapper[4812]: E0216 14:33:11.892648 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:33:23 crc kubenswrapper[4812]: I0216 14:33:23.879761 4812 scope.go:117] "RemoveContainer" containerID="269fbf67f93ebc3710ed9092cae6b928acc7e4620f9ea69535e0ad57766470d5" Feb 16 14:33:23 crc kubenswrapper[4812]: E0216 14:33:23.880691 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:33:23 crc kubenswrapper[4812]: E0216 14:33:23.883330 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:33:34 crc kubenswrapper[4812]: I0216 14:33:34.880248 4812 scope.go:117] "RemoveContainer" containerID="269fbf67f93ebc3710ed9092cae6b928acc7e4620f9ea69535e0ad57766470d5" Feb 16 14:33:34 crc kubenswrapper[4812]: E0216 14:33:34.881613 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:33:36 crc kubenswrapper[4812]: E0216 14:33:36.881518 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:33:47 crc kubenswrapper[4812]: E0216 14:33:47.884260 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:33:48 crc kubenswrapper[4812]: I0216 14:33:48.879489 4812 scope.go:117] "RemoveContainer" containerID="269fbf67f93ebc3710ed9092cae6b928acc7e4620f9ea69535e0ad57766470d5" Feb 16 14:33:48 crc kubenswrapper[4812]: E0216 14:33:48.880054 4812 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:33:59 crc kubenswrapper[4812]: I0216 14:33:59.884085 4812 scope.go:117] "RemoveContainer" containerID="269fbf67f93ebc3710ed9092cae6b928acc7e4620f9ea69535e0ad57766470d5" Feb 16 14:33:59 crc kubenswrapper[4812]: E0216 14:33:59.888003 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:34:00 crc kubenswrapper[4812]: E0216 14:34:00.884997 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:34:10 crc kubenswrapper[4812]: I0216 14:34:10.879826 4812 scope.go:117] "RemoveContainer" containerID="269fbf67f93ebc3710ed9092cae6b928acc7e4620f9ea69535e0ad57766470d5" Feb 16 14:34:10 crc kubenswrapper[4812]: E0216 14:34:10.880986 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:34:12 crc kubenswrapper[4812]: E0216 14:34:12.882529 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:34:23 crc kubenswrapper[4812]: I0216 14:34:23.880676 4812 scope.go:117] "RemoveContainer" containerID="269fbf67f93ebc3710ed9092cae6b928acc7e4620f9ea69535e0ad57766470d5" Feb 16 14:34:23 crc kubenswrapper[4812]: E0216 14:34:23.881650 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:34:27 crc kubenswrapper[4812]: E0216 14:34:27.881898 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.334694 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qckvs"] Feb 16 14:34:38 crc kubenswrapper[4812]: E0216 14:34:38.335746 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="44eeb9b0-40ec-4fa4-8dbe-d7444e3be714" containerName="extract-utilities" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.335767 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="44eeb9b0-40ec-4fa4-8dbe-d7444e3be714" containerName="extract-utilities" Feb 16 14:34:38 crc kubenswrapper[4812]: E0216 14:34:38.335785 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44eeb9b0-40ec-4fa4-8dbe-d7444e3be714" containerName="extract-content" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.335792 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="44eeb9b0-40ec-4fa4-8dbe-d7444e3be714" containerName="extract-content" Feb 16 14:34:38 crc kubenswrapper[4812]: E0216 14:34:38.335847 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44eeb9b0-40ec-4fa4-8dbe-d7444e3be714" containerName="registry-server" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.335855 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="44eeb9b0-40ec-4fa4-8dbe-d7444e3be714" containerName="registry-server" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.336095 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="44eeb9b0-40ec-4fa4-8dbe-d7444e3be714" containerName="registry-server" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.337939 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qckvs" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.354093 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qckvs"] Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.525772 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24738713-7a92-476a-ae95-c32f3c8e4e7a-catalog-content\") pod \"certified-operators-qckvs\" (UID: \"24738713-7a92-476a-ae95-c32f3c8e4e7a\") " pod="openshift-marketplace/certified-operators-qckvs" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.525972 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24738713-7a92-476a-ae95-c32f3c8e4e7a-utilities\") pod \"certified-operators-qckvs\" (UID: \"24738713-7a92-476a-ae95-c32f3c8e4e7a\") " pod="openshift-marketplace/certified-operators-qckvs" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.526091 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv4mt\" (UniqueName: \"kubernetes.io/projected/24738713-7a92-476a-ae95-c32f3c8e4e7a-kube-api-access-xv4mt\") pod \"certified-operators-qckvs\" (UID: \"24738713-7a92-476a-ae95-c32f3c8e4e7a\") " pod="openshift-marketplace/certified-operators-qckvs" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.627741 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv4mt\" (UniqueName: \"kubernetes.io/projected/24738713-7a92-476a-ae95-c32f3c8e4e7a-kube-api-access-xv4mt\") pod \"certified-operators-qckvs\" (UID: \"24738713-7a92-476a-ae95-c32f3c8e4e7a\") " pod="openshift-marketplace/certified-operators-qckvs" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.627822 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24738713-7a92-476a-ae95-c32f3c8e4e7a-catalog-content\") pod \"certified-operators-qckvs\" (UID: \"24738713-7a92-476a-ae95-c32f3c8e4e7a\") " pod="openshift-marketplace/certified-operators-qckvs" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.627940 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24738713-7a92-476a-ae95-c32f3c8e4e7a-utilities\") pod \"certified-operators-qckvs\" (UID: \"24738713-7a92-476a-ae95-c32f3c8e4e7a\") " pod="openshift-marketplace/certified-operators-qckvs" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.628644 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24738713-7a92-476a-ae95-c32f3c8e4e7a-utilities\") pod \"certified-operators-qckvs\" (UID: \"24738713-7a92-476a-ae95-c32f3c8e4e7a\") " pod="openshift-marketplace/certified-operators-qckvs" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.630726 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24738713-7a92-476a-ae95-c32f3c8e4e7a-catalog-content\") pod \"certified-operators-qckvs\" (UID: \"24738713-7a92-476a-ae95-c32f3c8e4e7a\") " pod="openshift-marketplace/certified-operators-qckvs" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.651132 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv4mt\" (UniqueName: \"kubernetes.io/projected/24738713-7a92-476a-ae95-c32f3c8e4e7a-kube-api-access-xv4mt\") pod \"certified-operators-qckvs\" (UID: \"24738713-7a92-476a-ae95-c32f3c8e4e7a\") " pod="openshift-marketplace/certified-operators-qckvs" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.669951 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qckvs" Feb 16 14:34:38 crc kubenswrapper[4812]: I0216 14:34:38.882107 4812 scope.go:117] "RemoveContainer" containerID="269fbf67f93ebc3710ed9092cae6b928acc7e4620f9ea69535e0ad57766470d5" Feb 16 14:34:38 crc kubenswrapper[4812]: E0216 14:34:38.882802 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6mn9_openshift-machine-config-operator(3c55e49a-a30d-4950-a690-c33d9f8a31e0)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6mn9" podUID="3c55e49a-a30d-4950-a690-c33d9f8a31e0" Feb 16 14:34:39 crc kubenswrapper[4812]: I0216 14:34:39.151608 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qckvs"] Feb 16 14:34:39 crc kubenswrapper[4812]: I0216 14:34:39.249779 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qckvs" event={"ID":"24738713-7a92-476a-ae95-c32f3c8e4e7a","Type":"ContainerStarted","Data":"05fa110e5f6839f6c825fcb2e29419973944a7438e7ec7d45b6967f4c61495c2"} Feb 16 14:34:40 crc kubenswrapper[4812]: I0216 14:34:40.282121 4812 generic.go:334] "Generic (PLEG): container finished" podID="24738713-7a92-476a-ae95-c32f3c8e4e7a" containerID="4e49d7648453c88938441fca77d8ef5c67ffdb97ef8ec54539db06287f57f55e" exitCode=0 Feb 16 14:34:40 crc kubenswrapper[4812]: I0216 14:34:40.282217 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qckvs" event={"ID":"24738713-7a92-476a-ae95-c32f3c8e4e7a","Type":"ContainerDied","Data":"4e49d7648453c88938441fca77d8ef5c67ffdb97ef8ec54539db06287f57f55e"} Feb 16 14:34:42 crc kubenswrapper[4812]: I0216 14:34:42.307271 4812 generic.go:334] "Generic (PLEG): container finished" podID="24738713-7a92-476a-ae95-c32f3c8e4e7a" 
containerID="db7496687251fcc39240de082430a906129f2e653896549944b8cefd0cd1b30c" exitCode=0 Feb 16 14:34:42 crc kubenswrapper[4812]: I0216 14:34:42.307532 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qckvs" event={"ID":"24738713-7a92-476a-ae95-c32f3c8e4e7a","Type":"ContainerDied","Data":"db7496687251fcc39240de082430a906129f2e653896549944b8cefd0cd1b30c"} Feb 16 14:34:42 crc kubenswrapper[4812]: E0216 14:34:42.881510 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-krnzs" podUID="a7d4eae6-781f-4675-a6c3-ee0f1589c735" Feb 16 14:34:43 crc kubenswrapper[4812]: I0216 14:34:43.322596 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qckvs" event={"ID":"24738713-7a92-476a-ae95-c32f3c8e4e7a","Type":"ContainerStarted","Data":"c210ccec1ff0d1f7b4827f9df7168c52abd98fe1cedaaf992782b7557dd24ed2"} Feb 16 14:34:43 crc kubenswrapper[4812]: I0216 14:34:43.345540 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qckvs" podStartSLOduration=2.9181688599999998 podStartE2EDuration="5.345515274s" podCreationTimestamp="2026-02-16 14:34:38 +0000 UTC" firstStartedPulling="2026-02-16 14:34:40.286174592 +0000 UTC m=+3769.350505323" lastFinishedPulling="2026-02-16 14:34:42.713521036 +0000 UTC m=+3771.777851737" observedRunningTime="2026-02-16 14:34:43.33906338 +0000 UTC m=+3772.403394091" watchObservedRunningTime="2026-02-16 14:34:43.345515274 +0000 UTC m=+3772.409845985"